Description
In the paper “Predicting ConceptNet Path Quality Using Crowdsourced Assessments of Naturalness”, human annotators were asked to choose the more natural path from randomly sampled path pairs, in order to derive a latent naturalness score for each path. However, we believe this approach faces the following data-quality challenges: (1) arbitrarily paired paths often lack a clear distinction in naturalness, making the choice ill-defined, and (2) comparisons between similarly natural (or similarly unnatural) paths add noise to the derived scores.
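Turning pairwise "which is more natural" judgments into per-path latent scores is commonly done with a pairwise-comparison model such as Bradley–Terry. As an illustration only (the paper's exact estimation procedure may differ), here is a minimal sketch that fits Bradley–Terry scores from (winner, loser) judgments with the standard MM iteration; path IDs and data are hypothetical:

```python
from collections import defaultdict

def bradley_terry(comparisons, iters=100):
    """Estimate latent naturalness scores from pairwise judgments.

    comparisons: list of (winner, loser) path-ID tuples, where the
    winner was judged the more natural path of the pair.
    Returns a dict mapping path ID -> score (higher = more natural),
    normalized to sum to 1.
    """
    wins = defaultdict(int)          # total wins per path
    pair_counts = defaultdict(int)   # comparisons per unordered pair
    items = set()
    for w, l in comparisons:
        wins[w] += 1
        pair_counts[frozenset((w, l))] += 1
        items.update((w, l))

    scores = {i: 1.0 for i in items}
    for _ in range(iters):
        new_scores = {}
        for i in items:
            # MM update: wins_i / sum over opponents of n_ij / (p_i + p_j)
            denom = sum(n / (scores[i] + scores[j])
                        for pair, n in pair_counts.items() if i in pair
                        for j in pair if j != i)
            new_scores[i] = wins[i] / denom if denom > 0 else scores[i]
        total = sum(new_scores.values())
        scores = {i: s / total for i, s in new_scores.items()}
    return scores

# Hypothetical judgments: path A is preferred over B, and B over C.
judgments = [("A", "B"), ("A", "B"), ("B", "C"), ("B", "C"), ("A", "C")]
scores = bradley_terry(judgments)
```

Under this model, the two concerns above show up directly: pairs with near-equal latent scores are the ones where annotator choices approach chance, so they contribute little signal relative to their noise.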
By refining the methodology to resolve these quality concerns, introducing more fine-grained indicators of naturalness, and replacing human annotators with LLM-based evaluators, we expect to achieve results that better fulfill the original aims of the study.