Abstract

Shared understanding of requirements between stakeholders and the development team is a critical success factor for requirements engineering. Workshops are an effective means for achieving such shared understanding. Stakeholders and team representatives can meet and discuss what a planned software system should be and how it should support achieving stakeholder goals. However, some important intended recipients of the requirements are often not present in such workshops: the developers. Thus, they cannot benefit from the in-depth understanding of the requirements and of the rationales for these requirements that develops during the workshops. The simple handover of a requirements specification hardly compensates for the rich requirements understanding that is needed for the development of an acceptable system. To compensate for the developers' absence from a requirements workshop, we propose to record that workshop on video. If workshop participants agree to be recorded, a video is relatively simple to create and can capture many more aspects of the requirements and rationales than a specification document. This paper presents the workshop video technique and a phenomenological evaluation of its use for requirements communication from the perspective of software developers. The results show how the technique was appreciated by observers of the video, present positive and negative feedback from the observers, and lead to recommendations for implementing the technique in practice.

Acknowledgments

This work was partially supported by the European Commission (FP7 project FI-STAR, Grant agreement no. 604691) and by the German Federal Ministry of Education and Research (K3 project 13N13548).

Appendix: evaluation questions

Table 25 shows the questions that were used for evaluating observers' perceptions of a workshop video. The questions were contained in a questionnaire that was administered to a respondent after he or she had studied a workshop video in detail.

Table 25: Evaluation questions

About the respondent

QQ1. Do you have experience in building applications like the one discussed in the workshop? [Yes/No]

Video rating

QQ2. Overall, how satisfied are you with the video? [Opinion Score Scale]

QQ3. Overall, how capable do you feel of implementing the solution discussed in the video? [Opinion Score Scale]

QQ4. Overall, how do you judge the use of video recording for communicating requirements to developers? [Strategic Planning Scale]

QQ5. Being a potential developer of the discussed solution, would you use a video for requirements communication? [Yes/No]

Video contents

QQ6. From the perspective of a potential developer of the discussed solution: Which parts of the video do you judge to be the most valuable inputs for development?

QQ7. From the perspective of a potential developer of the discussed solution: Which parts do you judge not to be useful as inputs for development?

QQ8. From the perspective of a potential developer of the discussed solution: Which parts were missing in the video and should have been covered as inputs for development?

Video use

QQ9. From the perspective of a potential developer of the discussed solution: How would you use the video to support your development work?

Recommendations

QQ10. From the perspective of a potential developer of the discussed solution: What should the requirements engineer do differently next time?

QQ11. From the perspective of a potential developer of the discussed solution: What should the filming crew do differently next time?

QQ12. From the perspective of a potential developer of the discussed solution: What documentation would you expect in addition to the video?

Conclusion

QQ13. Any other comments?

The question QQ1 was used to understand prior knowledge of the respondent. If learning plays a role in how a workshop video is perceived, prior knowledge of the system discussed in the workshop may affect the answers of the respondent.

The questions QQ2–QQ5 were posed as closed-ended questions for rating the studied workshop video. The idea was to develop a comprehensive judgment of the practice through triangulation. To help us understand the ratings, all answers had to be complemented with a rationale. QQ2 asked for an overall judgment of the workshop video and was used to answer RQ2.1. Such questioning with the opinion score scale is common in quality-of-experience evaluation, e.g., [73]. With this kind of question, it is the respondent, rather than the investigator, who decides which criteria to use for the judgment. In our case, we identified the criteria used by the respondents by asking them to justify their answers with a rationale. QQ3 was used in earlier requirements communication research [46] and thus provides an opportunity to compare the answers. QQ4 was proposed for strategic planning of quality [65], with the explicit notions of good enough and competitiveness. QQ3 and QQ4 were used to answer RQ1.2. QQ5, finally, elicited the willingness of the respondents to actually use a video for requirements communication, thus answering RQ3.1.

The questions QQ6–QQ8 were posed to understand the strengths and weaknesses of the workshop video when used for requirements communication, thus answering RQ1.3 and RQ2.3. The respondent was asked to take the perspective of a potential developer when answering these questions. To keep the study manageable, we did not require the respondents to actually develop the solution. Instead, they were asked to use their prior software development experience to identify plausible strengths and weaknesses. Thus, the answers reflect the reaction of a developer before he or she starts using the video for implementation. The split of the answers into data for RQ1.3 and RQ2.3 was made through content analysis. In the validation of the workshop video with the head designer from the real project, the opinions were based on actual development experience, thus reflecting the reaction of a developer during implementation. Both situations are relevant for the evaluation of workshop videos. The discussion of the validation with the head designer shows the similarities and differences between the two situations.

The questions QQ9–QQ12 were used to answer RQ3 by asking how the respondent would use the video to support development and by asking for recommendations for improving requirements engineering, video filming, and accompanying documentation. Again, the respondents based their answers on the assumption that they were about to start development, whereas the head designer gave feedback from within development. The discussion of the validation with the head designer shows the similarities and differences between the two situations.