TY - GEN
T1 - VizWiz::LocateIt - Enabling blind people to locate objects in their environment
AU - Bigham, Jeffrey P.
AU - Jayant, Chandrika
AU - Miller, Andrew
AU - White, Brandyn
AU - Yeh, Tom
PY - 2010
Y1 - 2010
AB - Blind people face a number of challenges when interacting with their environments because so much information is encoded visually. Text is pervasively used to label objects, colors carry special significance, and items can easily become lost in surroundings that cannot be quickly scanned. Many tools seek to help blind people solve these problems by enabling them to query for additional information, such as color or text shown on the object. In this paper we argue that many useful problems may be better solved by directly modeling them as search problems, and present a solution called VizWiz::LocateIt that directly supports this type of interaction. VizWiz::LocateIt enables blind people to take a picture and ask for assistance in finding a specific object. The request is first forwarded to remote workers who outline the object, enabling efficient and accurate automatic computer vision to guide users interactively from their existing cellphones. A two-stage algorithm is presented that uses this information to guide users to the appropriate object interactively from their phone.
UR - http://www.scopus.com/inward/record.url?scp=77956551517&partnerID=8YFLogxK
U2 - 10.1109/CVPRW.2010.5543821
DO - 10.1109/CVPRW.2010.5543821
M3 - Conference contribution
AN - SCOPUS:77956551517
SN - 9781424470297
T3 - 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, CVPRW 2010
SP - 65
EP - 72
BT - 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, CVPRW 2010
T2 - 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, CVPRW 2010
Y2 - 13 June 2010 through 18 June 2010
ER -