Paper accepted at EMNLP 2025
Our paper, “MuseScorer: Idea Originality Scoring At Scale,” has been accepted at EMNLP 2025, a premier conference on natural language processing.
Abstract: An objective, face-valid method for scoring idea originality is to measure each idea’s statistical infrequency within a population, an approach long used in creativity research. Yet computing these frequencies requires manually bucketing idea rephrasings, a process that is subjective, labor-intensive, error-prone, and brittle at scale. We introduce MuseScorer, a fully automated, psychometrically validated system for frequency-based originality scoring. MuseScorer integrates a Large Language Model (LLM) with externally orchestrated retrieval: given a new idea, it retrieves semantically similar prior idea-buckets and zero-shot prompts the LLM to judge whether the idea fits an existing bucket or forms a new one. These buckets enable frequency-based originality scoring without human annotation. Across five datasets (1,143 participants, 16,294 ideas), MuseScorer matches human annotators in idea clustering structure (AMI = 0.59) and participant-level scoring (r = 0.89), while demonstrating strong convergent and external validity. The system enables scalable, intent-sensitive, and human-aligned originality assessment for creativity research.
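To give a flavor of the retrieve-then-judge bucketing loop described in the abstract, here is a minimal Python sketch. It is not the paper's implementation: the embedding model, the retrieval depth, the prompt wording, and the `call_llm` stub are all illustrative assumptions standing in for the components MuseScorer orchestrates.

```python
# Sketch of frequency-based originality scoring via LLM-assisted bucketing.
# Assumptions (not from the paper): the sentence encoder, top_k, the prompt
# text, and the call_llm stub are placeholders for the real components.
from dataclasses import dataclass, field

import numpy as np
from sentence_transformers import SentenceTransformer  # any sentence encoder works

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice


@dataclass
class Bucket:
    label: str                    # representative phrasing of the idea
    embedding: np.ndarray         # embedding of the representative phrasing
    members: list[str] = field(default_factory=list)


def call_llm(prompt: str) -> str:
    """Placeholder for a zero-shot LLM call; plug in any chat-completion client."""
    raise NotImplementedError("connect your LLM client here")


def assign_to_bucket(idea: str, buckets: list[Bucket], top_k: int = 5) -> Bucket:
    """Retrieve the most similar existing buckets, then ask the LLM whether the
    idea belongs to one of them or should start a new bucket."""
    vec = encoder.encode(idea, normalize_embeddings=True)
    if buckets:
        sims = np.array([b.embedding @ vec for b in buckets])
        candidates = [buckets[i] for i in np.argsort(-sims)[:top_k]]
        options = "\n".join(f"{i}: {b.label}" for i, b in enumerate(candidates))
        answer = call_llm(
            f"Idea: {idea}\nExisting idea groups:\n{options}\n"
            "Reply with the number of the matching group, or NEW if none match."
        ).strip()
        if answer.upper() != "NEW":
            bucket = candidates[int(answer)]
            bucket.members.append(idea)
            return bucket
    bucket = Bucket(label=idea, embedding=vec, members=[idea])
    buckets.append(bucket)
    return bucket


def originality_scores(ideas: list[str]) -> dict[str, float]:
    """Score each idea by the statistical infrequency of its bucket:
    1 - (bucket size / total ideas), so rarer ideas score higher."""
    buckets: list[Bucket] = []
    assignment = {idea: assign_to_bucket(idea, buckets) for idea in ideas}
    total = len(ideas)
    return {idea: 1.0 - len(b.members) / total for idea, b in assignment.items()}
```

The key design point the sketch tries to convey is that retrieval is orchestrated outside the LLM: the model never sees the full bucket inventory, only a small set of semantically similar candidates, and its only job is the membership judgment. Once every idea is bucketed, originality falls out of simple bucket frequencies, with no human annotation in the loop.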
Read the full paper here: https://arxiv.org/pdf/2505.16232