Mikael Brasholt Skov & Jan Stage (2009): Training software developers and designers to conduct usability evaluations

Behaviour & Information Technology

As I was looking for material on usability testing, I found this article by Skov & Stage. They ran an experiment in their introductory course with 234 students (36 teams). In this course, they taught website usability testing for 40 hours and then studied how well the students managed the planning and conducting of a usability evaluation of an interactive website, and the interpretation of its results. The results showed that the students “gained good competence in conducting the evaluation, defining user tasks and producing a usability report, while they were less successful in acquiring skills for identifying and describing usability problems”.

For me, the most interesting part was the table listing the variables used to assess the usability report and the whole process, because it gives a good basis for grading the course assignments and serves as a checklist for students as well:

1. Conducting the evaluation
2. Task quality and relevance
3. Questionnaires/interviews quality and relevance
4. Test procedure description
5. Data quality
6. Clarity of usability problem list
7. Executive summary
8. Clarity of report
9. Report layout
10. Number of identified usability problems
11. Usability problem categorization
12. Practical relevance of usability problems
13. Qualitative results overview
14. Quantitative results overview
15. Use of literature
16. Conclusion
17. Test procedure evaluation
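To make the table concrete for grading, the variables above could be turned into a simple rubric. This is only a sketch of my own: the variable names come from the article's table, but the 0–5 scale and the averaging logic are my assumptions, not anything Skov & Stage describe.

```python
# Hypothetical grading rubric built from the article's 17 assessment variables.
# The 0-5 scale and equal weighting are assumptions, not from the article.

RUBRIC = [
    "Conducting the evaluation",
    "Task quality and relevance",
    "Questionnaires/interviews quality and relevance",
    "Test procedure description",
    "Data quality",
    "Clarity of usability problem list",
    "Executive summary",
    "Clarity of report",
    "Report layout",
    "Number of identified usability problems",
    "Usability problem categorization",
    "Practical relevance of usability problems",
    "Qualitative results overview",
    "Quantitative results overview",
    "Use of literature",
    "Conclusion",
    "Test procedure evaluation",
]

def grade(scores: dict) -> float:
    """Average the per-variable scores (assumed 0-5 scale) into one grade."""
    missing = [v for v in RUBRIC if v not in scores]
    if missing:
        raise ValueError(f"Unscored variables: {missing}")
    return sum(scores[v] for v in RUBRIC) / len(RUBRIC)

# Example: a report scored 4 on every variable averages to 4.0.
print(grade({v: 4 for v in RUBRIC}))
```

Equal weighting is the simplest choice; in practice a teacher would likely weight some variables (e.g. data quality) more heavily than others (e.g. report layout).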

I don’t especially like using the number of identified problems as a metric for assessing the quality of the evaluation, since that number depends on the level at which one categorizes and reports the problems, and the quality and severity of the problems found matter more than their quantity. (In this study, a team’s report got maximum points for 17 identified problems.) Otherwise, the list seems pretty good in my opinion.