
In The Angle Orthodontist; h5-index 34.0

OBJECTIVES : To evaluate the utility and efficiency of four voice-activated, artificial intelligence-based virtual assistants (Alexa, Google Assistant, Siri, and Cortana) in addressing commonly asked patient questions in orthodontic offices.

MATERIALS AND METHODS : Two orthodontists, an orthodontic resident, an oral and maxillofacial radiologist, and a dental student used a standardized list of 12 questions to query and evaluate the four most common commercial virtual assistant devices. A modified Likert scale was used to evaluate their performance.

RESULTS : Lower scores indicated superior efficiency and utility. Google Assistant had the lowest (best) mean score, followed by Siri, Alexa, and Cortana, and its score was significantly lower than those of Alexa and Cortana. With the exception of Amazon Alexa, there was significant variability in response scores among the evaluators.
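As a rough illustration of how ratings from such a design (five evaluators, 12 standardized questions, four assistants, modified Likert scale) might be aggregated and compared, the sketch below computes a mean score per assistant and runs a Friedman test. The rating values and the choice of test are assumptions for illustration only; the abstract does not report the authors' actual data or statistical method.

```python
# Minimal sketch (not the study's actual analysis): aggregate modified-Likert
# ratings per assistant and test for overall differences in scores.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)

assistants = ["Google Assistant", "Siri", "Alexa", "Cortana"]

# Hypothetical ratings: 5 evaluators x 12 questions per assistant,
# on a 1 (best) to 5 (worst) modified Likert scale.
ratings = {name: rng.integers(1, 6, size=(5, 12)) for name in assistants}

# Mean score per assistant (lower = better efficiency/utility, as in the study).
for name, scores in ratings.items():
    print(f"{name}: mean score = {scores.mean():.2f}")

# Friedman test across the four assistants, treating each
# evaluator-question pair as one related observation.
samples = [ratings[name].ravel() for name in assistants]
stat, p = friedmanchisquare(*samples)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
```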

CONCLUSIONS : The commonly available commercial virtual assistants tested in this study differed significantly in how they responded to users and in their performance on common orthodontic queries. An intelligent virtual assistant with evidence-based responses specifically curated for orthodontics could address these shortcomings. The investigators in this study agreed that such a device would provide value to patients and clinicians.

Perez-Pino Anthony, Yadav Sumit, Upadhyay Madhur, Cardarelli Lauren, Tadinada Aditya

2023-Mar-14

Artificial intelligence, Virtual assistants