Tag Archives: evaluation
Varsaluoma, J., & Sahar, F. (2014, October). Usefulness of long-term user experience evaluation to product development: practitioners’ views from three case studies. In Proceedings of the 8th Nordic Conference on Human-Computer Interaction: Fun, Fast, Foundational (pp. 79-88). ACM.
Problems with long-term UX and longitudinal research approaches cause frustration for expert users. This article investigates how practitioners in companies utilize the results of long-term UX evaluations and examines their usefulness from the practitioners’ perspective. According to previous research, it is encouraged to … Continue reading
Gould, J.D., & Lewis, C. (1985). Designing for usability: Key principles and what designers think. Communications of the ACM, 28(3), 300-311.
Legends of UCD, Gould and Lewis, reported their famous key principles for design. These were later extended and adapted into most of our standards. I’d say it is a truly seminal work and a solid starting point for any historical retrospective. Three … Continue reading
Dicks, R.S. (2002). Mis-usability: On the uses and misuses of usability testing. In Proceedings of the 20th Annual International Conference on Computer Documentation (SIGDOC ’02), pp. 26-30. DOI=10.1145/584955.584960 http://doi.acm.org/10.1145/584955.584960
Dicks discusses the problems he has noticed in usability testing practices and in the teaching of usability testing. First, he reminds us which characteristics are required for a test to count as a usability test. Dumas & Redish (1994; see references from the … Continue reading
Greenberg, S., & Buxton, B. (2008). Usability evaluation considered harmful (some of the time). In Proceedings of CHI 2008, pp. 111-120.
In their paper, Greenberg and Buxton list a number of problems related to usability testing, and situations where usability evaluation can even be harmful “if naively done ‘by rule’ rather than ‘by thought’”. Their main message is: “the choice … Continue reading
Wixon, D. (2011). Measuring fun, trust, confidence, and other ethereal constructs: It isn’t that hard. Interactions, 18(6).
In his article, Wixon argues that everything can be measured, but that something about the object of measurement is always lost. Therefore, we need multiple measures to capture the real thing more accurately. Still, the most important thing in …
Hornbæk, K. (2010). Dogmas in the assessment of usability evaluation methods. Behaviour & Information Technology, 29(1), pp. 97-111. http://search.ebscohost.com/login.aspx?direct=true&db=bth&AN=49152358&site=ehost-live
Kasper Hornbæk has gathered seven assumptions that seem to hold in most of the assessments and comparisons made of usability … Continue reading
Sengers, P., & Gaver, B. (2006). Staying open to interpretation: Engaging multiple meanings in design and evaluation. In Proceedings of DIS ’06. DOI= http://doi.acm.org/10.1145/1142405.1142422
Sengers and Gaver want to add new perspectives to both the design and the evaluation of new systems. In addition to designing systems that “convey a single, specific, clear interpretation of what they are for and how they should be … Continue reading
Andriessen, J.H.E. (2003). Working with Groupware: Understanding and Evaluating Collaboration Technology.
Andriessen’s book is a compact yet comprehensive introduction to collaboration technologies, and especially to thinking about and evaluating their quality from the users’ perspective. In just 150 pages, the book covers the implications and roles of collaboration technology in work and society, … Continue reading