PLoS ONE
When people have to solve many tasks, they can aggregate diverse individuals' judgments using the majority rule, which often improves the accuracy of judgments (the wisdom of crowds). When aggregating judgments, individuals' subjective confidence is a useful cue for deciding which judgments to accept. However, can confidence in one task set predict performance not only in the same task set but also in another? We examined this issue through computer simulations using behavioral data obtained from binary-choice experimental tasks. In our simulations, we developed a "training-test" approach: we split the questions used in the behavioral experiments into "training questions" (used to identify individuals' confidence levels) and "test questions" (the questions to be solved), similar to the cross-validation method in machine learning. We found that (i) in analyses of the behavioral data, confidence in a given question predicted accuracy in that question, but did not always predict accuracy well in other questions; (ii) in a computer simulation of the agreement between two individuals' judgments, individuals with high confidence in one training question tended to make less diverse judgments in other test questions; and (iii) in a computer simulation of group judgments, groups constructed from individuals with high confidence in the training question(s) generally performed well, but their performance sometimes dropped substantially in the test questions, especially when only one training question was available. These results suggest that in highly uncertain situations, an effective strategy is to aggregate diverse individuals regardless of their confidence levels in the training questions, so as to avoid a decrease in group accuracy in the test questions. We believe that our simulations, which follow a "training-test" approach, offer practical implications for retaining groups' ability to solve many tasks.
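As a rough illustration of the "training-test" approach described above, the following is a minimal Python sketch: split questions into training and test sets, select a group by mean confidence on the training questions, and aggregate test answers by majority rule, compared against a group drawn regardless of confidence. The synthetic data generator, group size, and all variable names are our own assumptions for illustration, not the paper's actual data or procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic stand-in for the behavioral data:
# answers[i, q]    -- individual i's binary answer to question q
# confidence[i, q] -- i's subjective confidence in that answer (0-1)
# truth[q]         -- the correct answer to question q
n_individuals, n_questions = 100, 20
truth = rng.integers(0, 2, size=n_questions)
skill = rng.uniform(0.4, 0.9, size=n_individuals)  # per-person accuracy (assumed)
correct = rng.random((n_individuals, n_questions)) < skill[:, None]
answers = np.where(correct, truth, 1 - truth)
confidence = np.clip(skill[:, None] + rng.normal(0, 0.15, answers.shape), 0, 1)

# "Training-test" split over questions, cross-validation-style.
train_q, test_q = np.arange(5), np.arange(5, n_questions)

# Group of individuals with the highest mean confidence on training questions,
# aggregated on the test questions by majority rule (odd size avoids ties).
group_size = 11
group = np.argsort(confidence[:, train_q].mean(axis=1))[-group_size:]
majority = (answers[np.ix_(group, test_q)].mean(axis=0) > 0.5).astype(int)
conf_acc = (majority == truth[test_q]).mean()

# Baseline: aggregate a random group, i.e., regardless of confidence.
rand = rng.choice(n_individuals, size=group_size, replace=False)
rand_majority = (answers[np.ix_(rand, test_q)].mean(axis=0) > 0.5).astype(int)
rand_acc = (rand_majority == truth[test_q]).mean()

print(f"high-confidence group: {conf_acc:.2f}, random group: {rand_acc:.2f}")
```

Repeating such runs over many random splits (and varying the number of training questions) would mimic the paper's comparison of how confidence-based selection generalizes from training to test questions.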
Shirasuna Masaru, Honda Hidehito
2023