Abstract
A common way to describe requirements in Agile software development is through user stories: short descriptions of desired functionality. However, there are no widely accepted quantitative metrics for evaluating user stories. We propose a novel metric, instability, which measures the number of changes made to a user story after it has been assigned to a developer for implementation in the near future. A high instability score suggests that the user story was not detailed and coherent enough to be implemented. The instability of a user story can be extracted automatically from industry-standard issue tracking systems such as Jira by retrospectively analyzing user stories that were fully implemented. We also propose a method for building prediction models that identify user stories likely to have high instability before they are assigned to a developer. The method applies a machine learning algorithm to implemented user stories, using only features that are available before a user story is assigned. We evaluate our prediction models on several open-source projects and one commercial project and show that they outperform baseline prediction models.
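As a rough illustration of the metric (not the authors' exact implementation), the Python sketch below counts changelog entries recorded after a user story was assigned to a developer. The record structure, field names, and the set of tracked fields are assumptions made for the example; real Jira changelogs carry more detail and would be retrieved through the tracker's API or an export.

```python
from datetime import datetime

# Hypothetical changelog for one user story: each entry records when a field
# of the story was changed. This structure is assumed for illustration only.
changelog = [
    {"changed_at": "2021-03-01T10:00:00", "field": "description"},
    {"changed_at": "2021-03-10T09:30:00", "field": "assignee"},
    {"changed_at": "2021-03-12T14:20:00", "field": "description"},
    {"changed_at": "2021-03-15T08:05:00", "field": "acceptance criteria"},
]

def instability(changelog, assigned_at,
                tracked_fields=("description", "acceptance criteria")):
    """Count changes to tracked fields made after the story was assigned.

    A higher count suggests the story was not detailed and coherent enough
    to be implemented as written (a simplified reading of the metric).
    """
    assigned = datetime.fromisoformat(assigned_at)
    return sum(
        1
        for entry in changelog
        if entry["field"] in tracked_fields
        and datetime.fromisoformat(entry["changed_at"]) > assigned
    )

# Two description/acceptance-criteria changes occur after the assignment date.
print(instability(changelog, assigned_at="2021-03-10T09:30:00"))  # -> 2
```

In a prediction setting, scores computed retrospectively in this way would serve as labels, while the model itself would use only features known before assignment.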
| Original language | English |
| --- | --- |
| Pages (from-to) | 231-248 |
| Number of pages | 18 |
| Journal | Requirements Engineering |
| Volume | 27 |
| Issue number | 2 |
| DOIs | |
| State | Published - 1 Jun 2022 |
Keywords
- Agile software development
- Machine learning
- Requirements
- User story
ASJC Scopus subject areas
- Software
- Information Systems