Large language models (LLMs) can be employed to automate the generation of software requirements from natural language inputs such as the transcripts of elicitation interviews. However, evaluating whether those derived requirements faithfully reflect the stakeholders’ needs remains a largely manual task. We introduce TEXT2STORIES, a task and set of metrics for text-to-story alignment that quantify the extent to which requirements (in the form of user stories) match the actual needs expressed by the elicitation session participants. Given an interview transcript and a set of user stories, our metric quantifies (i) correctness: the proportion of stories supported by the transcript, and (ii) completeness: the proportion of the transcript supported by at least one story. We segment the transcript into text chunks and instantiate the alignment as a matching problem between chunks and stories. Experiments over four datasets show that an LLM-based matcher achieves 0.86 macro-F1 on held-out annotations, while embedding models alone lag behind but enable effective blocking. Finally, we show how our metrics enable the comparison across sets of stories (e.g., human vs. generated), positioning TEXT2STORIES as a scalable, source-faithful complement to existing user-story quality criteria.
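As a minimal sketch of the two metrics, assume the matcher's decisions are available as a binary chunk-by-story matrix (the function name and data representation below are illustrative, not taken from the paper): correctness is the fraction of stories supported by at least one chunk, and completeness is the fraction of chunks covered by at least one story.

```python
from typing import Sequence


def correctness_completeness(matches: Sequence[Sequence[bool]]) -> tuple[float, float]:
    """Compute correctness and completeness from a chunk-story match matrix.

    matches[i][j] is True when transcript chunk i supports user story j
    (e.g., as judged by an LLM-based matcher). Names and shapes here are
    assumptions for illustration only.
    """
    n_chunks = len(matches)
    n_stories = len(matches[0]) if n_chunks else 0

    # Correctness: proportion of stories supported by at least one transcript chunk.
    supported_stories = sum(
        any(matches[i][j] for i in range(n_chunks)) for j in range(n_stories)
    )
    correctness = supported_stories / n_stories if n_stories else 0.0

    # Completeness: proportion of transcript chunks covered by at least one story.
    covered_chunks = sum(any(row) for row in matches)
    completeness = covered_chunks / n_chunks if n_chunks else 0.0

    return correctness, completeness
```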
Text2Stories: Evaluating the alignment between stakeholder interviews and generated user stories
Submitted to arXiv, 8 October 2025
Type:
Report
Date:
2025-10-08
Department:
Data Science
Eurecom Ref:
8489
Copyright:
© EURECOM. Personal use of this material is permitted. The definitive version of this paper was published in Submitted to arXiv, 8 October 2025 and is available at:
See also:
PERMALINK: https://www.eurecom.fr/publication/8489