What and when to look? Temporal span proposal network for video relation detection

Sangmin Woo, Junhyug Noh, Kangil Kim

Research output: Contribution to journal › Article › peer-review

Abstract

Identifying relations between objects is central to understanding a scene. While several works have addressed relation modeling in the image domain, the video domain remains constrained by the challenging dynamics of spatio-temporal interactions (e.g., between which objects is there an interaction? when does a relation start and end?). To date, two representative approaches have been proposed to tackle Video Visual Relation Detection (VidVRD): segment-based and window-based. Segment-based methods lack temporal continuity, while window-based methods scale poorly. To address these limitations, we propose a novel approach named Temporal Span Proposal Network (TSPN). TSPN tells what to look at: it sparsifies the relation search space by scoring the relationness of each object pair, i.e., measuring how probable it is that a relation exists. TSPN tells when to look: it simultaneously predicts the start-end timestamps (i.e., temporal spans) and categories of all possible relations by utilizing full video context. These two designs enable a win-win scenario: TSPN accelerates training by 2× or more over existing methods and achieves competitive performance on two VidVRD benchmarks (ImageNet-VidVRD and VidOR). Moreover, comprehensive ablative experiments demonstrate the effectiveness of our approach.
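The abstract's two-stage idea can be illustrated with a minimal sketch: score each object pair's relationness to decide *what* to look at, then predict a temporal span per surviving pair to decide *when* to look. Everything below is an illustrative assumption (random toy features, linear scorers, a 0.5 threshold), not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def score_relationness(pair_feats, w, b):
    # Probability-like score of each pair participating in any relation.
    return sigmoid(pair_feats @ w + b)  # shape: (num_pairs,)

def propose_spans(pair_feats, w_span, num_frames):
    # Predict one (start, end) span per pair over the full video;
    # min/max ordering guarantees start <= end.
    raw = sigmoid(pair_feats @ w_span)  # (num_pairs, 2) in [0, 1]
    start = np.minimum(raw[:, 0], raw[:, 1]) * (num_frames - 1)
    end = np.maximum(raw[:, 0], raw[:, 1]) * (num_frames - 1)
    return np.stack([start, end], axis=1)

# Toy setup: 6 candidate object pairs, 32-d pair features, 90-frame video.
num_pairs, dim, num_frames = 6, 32, 90
pair_feats = rng.normal(size=(num_pairs, dim))
w_rel, b_rel = rng.normal(size=dim), 0.0
w_span = rng.normal(size=(dim, 2))

# "What to look at": keep only pairs whose relationness clears a threshold.
rel = score_relationness(pair_feats, w_rel, b_rel)
keep = rel > 0.5

# "When to look": predict one temporal span per surviving pair.
spans = propose_spans(pair_feats[keep], w_span, num_frames)
```

In a trained model, the thresholding sparsifies the quadratic pair space before the (more expensive) span and category heads run, which is where the claimed training speed-up would come from.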

Original language: English
Article number: 129503
Journal: Expert Systems with Applications
Volume: 297
DOIs
State: Published - 1 Feb 2026

Bibliographical note

Publisher Copyright:
© 2025 Elsevier Ltd

Keywords

  • Multi object tracking
  • Proposal network
  • Relationship detection
  • Video understanding
