ReCog: Supporting Blind People in Recognizing Personal Objects

Dragan Ahmetovic, Daisuke Sato, Uran Oh, Tatsuya Ishihara, Kris Kitani, Chieko Asakawa

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

21 Scopus citations

Abstract

We present ReCog, a mobile app that enables blind users to recognize objects by training a deep network with their own photos of those objects. This functionality is useful for differentiating personal objects, which cannot be recognized by pre-trained recognizers and may lack distinguishing tactile features. To ensure that objects are well-framed in the captured photos, ReCog integrates camera-aiming guidance that tracks target objects and instructs the user through verbal and sonification feedback to frame them appropriately. We report a two-session study with 10 blind participants using ReCog for object training and recognition, with and without guidance. We show that ReCog enables blind users to train and recognize their personal objects, and that camera-aiming guidance helps novice users increase their confidence, achieve better accuracy, and learn strategies for capturing better photos.

Original language: English
Title of host publication: CHI 2020 - Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
Publisher: Association for Computing Machinery
ISBN (Electronic): 9781450367080
DOIs
State: Published - 21 Apr 2020
Event: 2020 ACM CHI Conference on Human Factors in Computing Systems, CHI 2020 - Honolulu, United States
Duration: 25 Apr 2020 - 30 Apr 2020

Publication series

Name: Conference on Human Factors in Computing Systems - Proceedings

Conference

Conference: 2020 ACM CHI Conference on Human Factors in Computing Systems, CHI 2020
Country/Territory: United States
City: Honolulu
Period: 25/04/20 - 30/04/20

Keywords

  • object recognition
  • photography guidance
  • visual impairment
