Abstract
We present ReCog, a mobile app that enables blind users to recognize objects by training a deep network on their own photos of those objects. This capability is useful for differentiating personal objects, which cannot be recognized by pre-trained recognizers and may lack distinguishing tactile features. To ensure that objects are well framed in the captured photos, ReCog integrates camera-aiming guidance that tracks the target object and instructs the user, through verbal and sonification feedback, to frame it appropriately. We report a two-session study in which 10 blind participants used ReCog for object training and recognition, with and without guidance. We show that ReCog enables blind users to train and recognize their personal objects, and that camera-aiming guidance helps novice users increase their confidence, achieve better accuracy, and learn strategies for capturing better photos.
Original language | English
---|---
Title of host publication | CHI 2020 - Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
Publisher | Association for Computing Machinery
ISBN (Electronic) | 9781450367080
DOIs |
State | Published - 21 Apr 2020
Event | 2020 ACM CHI Conference on Human Factors in Computing Systems, CHI 2020 - Honolulu, United States. Duration: 25 Apr 2020 → 30 Apr 2020
Publication series

Name | Conference on Human Factors in Computing Systems - Proceedings
---|---
Conference

Conference | 2020 ACM CHI Conference on Human Factors in Computing Systems, CHI 2020
---|---
Country/Territory | United States
City | Honolulu
Period | 25/04/20 → 30/04/20
Bibliographical note
Funding Information: We would like to thank all the participants who took part in our user study. This work was sponsored in part by Shimizu Corporation and Uptake (Carnegie Mellon University Machine Learning for Social Good fund).
Publisher Copyright:
© 2020 ACM.
Keywords
- object recognition
- photography guidance
- visual impairment