Hornbæk, K 2010, Dogmas in the assessment of usability evaluation methods


Hornbæk, K. 2010, 'Dogmas in the assessment of usability evaluation methods', Behaviour & Information Technology, vol. 29, no. 1, pp. 97-111, http://search.ebscohost.com/login.aspx?direct=true&db=bth&AN=49152358&site=ehost-live

Kasper Hornbæk has gathered seven assumptions that seem to hold in most assessments and comparisons of usability evaluation methods (UEMs). These assumptions, which Hornbæk calls dogmas, are briefly as follows (examples of the problems are given in brackets):

  1. The number of found problems is used to rank and compare the methods
    (regardless of the problems’ generality, type, clarity or even validity)
  2. Matching problem descriptions is straightforward
    (the process of matching the found problems is poorly documented in the assessments or missing altogether, and matching may generalize the problems too much, or be done at too precise a level, so that the overlaps between the findings of different methods come out too big or too small)
  3. Usability evaluation is so well instructed that finding usability problems is straightforward
    (the effect of evaluators’ expertise is often forgotten, as is the fact that some problems are found as a side effect of conducting an evaluation, not because of the method used)
  4. The individual usability problem is used as the unit of analysis
    (this is related to the first dogma, but the concern here is the “big picture” that the sets of found problems paint, rather than the individual problems)
  5. Evaluations are assessed in isolation from design
    (so that the impact of the evaluations on further development of the system is not assessed)
  6. A single best UEM exists
    (most assessments conclude by recommending one method over the others, ignoring the context and goals of the evaluation, instead of assessing small sets of methods that are quite often used together in practice)
  7. Usability problems are real
    (in many assessments, the found usability problems are assumed to be real problems that real users would face in their tasks. In some assessments, the results of usability tests are taken as the baseline against which the other methods are compared)

The list is well gathered from several studies and assessments, and is recommended reading for anyone planning to assess different usability evaluation methods. Unfortunately for me, the paper seems to focus more on usability inspection methods than on user testing, but most of the issues apply anyway.

Posted by Sirpa

