
In ASSETS: Annual ACM Conference on Assistive Technologies

Teachable object recognizers address a very practical need for blind people: instance-level object recognition. However, they assume that one can visually inspect the photos provided for training, a critical step that is inaccessible to those who are blind. In this work, we engineer data descriptors that address this challenge. They indicate in real time whether the object in the photo is cropped or too small, whether a hand is included, whether the photo is blurred, and how much the photos vary from each other. Our descriptors are built into an open-source testbed iOS app called MYCam. In a remote user study in the homes of N = 12 blind participants, we show how the descriptors, even when error-prone, support experimentation and have a positive impact on the quality of the training set, which can translate to model performance, though this gain is not uniform. Participants found the app simple to use, indicating that they could effectively train it and that the descriptors were useful. However, many found the training tedious, opening discussions around the need to balance information, time, and cognitive load.
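The abstract does not specify how each descriptor is computed. As a rough illustration only, a blur descriptor could be implemented with a variance-of-Laplacian heuristic; the function names and threshold below are hypothetical and are not taken from MYCam.

```python
# Minimal sketch of a blur descriptor, assuming a variance-of-Laplacian
# heuristic (the paper does not describe MYCam's actual implementation).
import cv2


def blur_score(image_path: str) -> float:
    """Return the variance of the Laplacian; low values suggest a blurry photo."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())


def is_blurred(image_path: str, threshold: float = 100.0) -> bool:
    # The threshold is an illustrative value, not one reported in the paper.
    return blur_score(image_path) < threshold
```

In practice, such a score would be computed on each training photo as it is captured, so that the app can warn the user in real time before the photo is added to the training set.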

Hong Jonggi, Gandhi Jaina, Mensah Ernest Essuah, Zeraati Farnaz Zamiri, Jarjue Ebrima Haddy, Lee Kyungjun, Kacorri Hernisa

2022-Oct

blind, machine teaching, object recognition, participatory machine learning, visual impairment