Author Archives: Sirpa
Clemmensen, T., Hertzum, M., Hornbaek, K., Shi, Q. and Yammiyavar, P. (2009) Cultural cognition in usability evaluation. Interacting with Computers, Vol. 21, No. 3, pp. 212-220.
DOI= 10.1016/j.intcom.2009.05.003 This article discusses the cultural differences between Eastern and Western people in thinking-aloud tests. Eastern people in this paper mean people with a background from China or “countries heavily influenced by its culture”, and Western people … Continue reading
Isomursu, M., Kuutti, K. and Vainamo, S. (2004) Experience clip: Method for user participation and evaluation of mobile concepts. In Proceedings of the eighth conference on Participatory design: Artful integration: interweaving media, materials and practices (PDC 04), Vol. 1, pp. 83-92.
DOI= 10.1145/1011870.1011881 In Experience Clip, a pair of users recruited from passers-by is invited to participate in the evaluation of a mobile application whose use centrally involves moving around. They gave the evaluated application to the other … Continue reading
Foelstad, A. and Hornbaek, K. (2010) Work-domain knowledge in usability evaluation: Experiences with Cooperative Usability Testing. The Journal of Systems and Software, Vol. 83, No. 11, pp. 2019-2030.
DOI= 10.1016/j.jss.2010.02.026 Foelstad and Hornbaek studied the use of Cooperative Usability Testing in the development of two work-domain specific systems. As modifications to the original method, they included an interpretation phase after each task, and used task-scenario walkthroughs instead of … Continue reading
Froekjaer, E. and Hornbaek, K. (2005) Cooperative usability testing: complementing usability tests with user-supported interpretation sessions. In CHI ’05 Extended Abstracts on Human Factors in Computing Systems (CHI EA ’05), pp. 1383-1386.
DOI= 10.1145/1056808.1056922 Froekjaer and Hornbaek present a usability testing method called Cooperative Usability Testing. It consists of two parts: The first part is an interaction session in which the user interacts with the system as in contextual inquiry or in thinking aloud … Continue reading
Woolrych, A., Hornbaek, K., Froekjaer, E., and Cockton, G. (2011). Ingredients and meals rather than recipes: A proposal for research that does not treat usability evaluation methods as indivisible wholes. International Journal of Human-Computer Interaction, Vol. 27, No. 10, pp. 940-970.
DOI= 10.1080/10447318.2011.555314 Woolrych et al. nicely analyse the state of research and comparisons on usability evaluation methods. Too often, these methods are treated as precisely presented step-by-step procedures that almost automatically produce a list of usability problems regardless of the … Continue reading
Lindgaard, G. and Chattratichart, J. (2007) Usability testing: what have we overlooked?. In Proceedings of the SIGCHI conference on Human factors in computing systems (CHI ’07). ACM, New York, NY, USA, pp. 1415-1424.
DOI= 10.1145/1240624.1240839 The studies by Lindgaard and Chattratichart indicate a need to shift the focus from the number of test users to the number of test tasks in usability testing. Lindgaard and Chattratichart analysed the results of several usability teams … Continue reading
Holleran, P.A. (1991) A methodological note on pitfalls in usability testing. Behaviour & Information Technology, Vol. 10, no. 5, pp. 345-357.
DOI= 10.1080/01449299108924295 Good usability testing is similar to good empirical research: the use of improper procedures will result in invalid data, and thereby poor validity and reliability. Holleran categorises pitfalls in usability testing into three groups: sampling problems mainly in planning … Continue reading
Orne, M.T. (1962) On the social psychology of the psychological experiment: With particular reference to demand characteristics and their implications. American Psychologist. Vol. 17, No. 11, pp. 776-783.
Accession Number= 00000487-196211000-00005 Finally I found an article presenting notes on social psychology studies regarding the behavior of people as test participants. Although usability tests are not generally treated as scientific or psychological experiments, many similar phenomena as in Orne’s … Continue reading
Kjeldskov, J., Skov, M.B. and Stage, J. (2004) Instant data analysis: conducting usability evaluations in a day. In Proceedings of the third Nordic conference on Human-computer interaction (NordiCHI ’04). ACM, New York, NY, USA, pp. 233-240.
DOI= 10.1145/1028014.1028050 Kjeldskov et al. decided to test if the analysis phase of usability testing could be cut down and thereby cut the costs of testing. They utilized the resources already used in testing, i.e. the moderator and a note taker, … Continue reading
Sawyer, P., Flanders, A. & Wixon, D. (1996) Making a difference – The impact of inspections. In Proceedings of the ACM CHI’96 Conference on Human Factors in Computing Systems. pp. 376-382.
DOI= 10.1145/238386.238579 Sawyer, Flanders and Wixon defined a metric called impact ratio to measure the effectiveness of usability evaluation methods. This numerical value gives the proportion of all the problems found that the development team commits to fix … Continue reading
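As summarised above, the impact ratio is simple arithmetic: committed fixes divided by problems found. A minimal sketch in Python; the function name and the example counts are illustrative, not taken from the paper:

```python
def impact_ratio(problems_found: int, problems_committed: int) -> float:
    """Proportion of found usability problems that the development team
    commits to fix (the impact ratio of Sawyer, Flanders and Wixon)."""
    if problems_found <= 0:
        raise ValueError("at least one problem must have been found")
    return problems_committed / problems_found

# Hypothetical evaluation: 24 problems found, team commits to fixing 18.
print(impact_ratio(24, 18))  # 0.75
```

A higher ratio suggests the evaluation produced findings the team actually acted on, which is the sense in which the metric measures an evaluation method's effectiveness.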
Hackman G.S. and Biers, D.W. (1992) Team Usability Testing: Are two Heads Better than One? Proceedings of the Human Factors and Ergonomics Society Annual Meeting, October 1992; vol. 36, 16: pp. 1205-1209.
DOI= 10.1177/154193129203601605 Hackman and Biers conducted studies to compare the performance of a single user alone, a single user with an observer, and a pair of users, all using the thinking-aloud method. Their results showed that the presence of … Continue reading
Höysniemi, J., Hämäläinen, P. and Turkki, L. (2003) Using peer tutoring in evaluating the usability of a physically interactive computer game with children. Interacting with Computers, Vol. 15, No. 2, pp. 203-225.
DOI= 10.1016/S0953-5438(03)00008-0 This study used peer tutoring to evaluate an interactive computer game with children. They used either a pair of children or one child at a time to teach the use of the game to another child. This way, … Continue reading
Kennedy, S. (1989) Using video in the BNR usability lab. SIGCHI Bulletin. Vol. 21, No. 2, pp. 92-95.
DOI= 10.1145/70609.70624 Co-discovery learning shares many principles with constructive interaction but, in addition, has a list of specific tasks and includes a reflection on the task difficulty after each task. Sue Kennedy and her colleagues used this method in evaluating various … Continue reading
O’Malley, C.E., Draper, S.W. & Riley, M.S. (1984) Constructive interaction: A method for studying human-computer-human interaction. In Shackel, B. (Ed.) Human-computer interaction – INTERACT’84. pp. 269-274.
Constructive interaction is a method involving two users at the same time in solving a problem. O’Malley et al. brought this method into the study of human-computer interaction in the mid-1980s. In this method, two subjects with comparable expertise … Continue reading
Trudel, C-I. and Payne, S.J. (1995) Reflection and goal management in exploratory learning. International Journal of Human-Computer Studies. Vol. 42, No. 3, pp 307-339.
DOI= 10.1006/ijhc.1995.1015 In these experiments, Trudel and Payne studied the effect of constraining the number of keystrokes that subjects were allowed to make while they were learning to use a new interactive device. They also tried the effect of having … Continue reading
Trudel, C-I. and Payne, S.J. (1996) Self-monitoring during exploration of an interactive device. International Journal of Human-Computer Studies, Vol. 45, No. 6, pp. 723-747.
DOI= 10.1006/ijhc.1996.0076 Trudel and Payne are interested in how people learn to use interactive devices and how this learning can be supported. Their studies relate to usability testing as they made experiments where “subject explored an unfamiliar interactive device without … Continue reading
van den Haak, M.J. and de Jong, M.D.T. (2005) Analyzing the interaction between facilitator and participants in two variants of the think-aloud method. Proceedings of the International Professional Communication Conference, 2005 (IPCC 2005). pp. 323- 327.
DOI= 10.1109/IPCC.2005.1494192 Van den Haak and de Jong (2005) compared the interaction between the test moderator and the test user in two different settings: using the thinking-aloud method alone and paired user testing, which they call a constructive interaction test. They analysed parts … Continue reading
Schulte-Mecklenbeck, M. and Huber, O. (2003) Information search in the laboratory and on the Web: With or without an experimenter. Behavior Research Methods, Instruments, & Computers, Vol. 35, No. 2, pp. 227-235.
DOI= 10.3758/BF03202545 The studies of Schulte-Mecklenbeck and Huber focused on the effect of the test location, comparing laboratory settings with uncontrolled settings in locations selected by the users. The users were asked to do risky decision making, and find … Continue reading
Reeves, B. and Nass, C. (1996) The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Chicago, IL, US: Center for the Study of Language and Information; New York, NY, US: Cambridge University Press, 305 p.
Reeves and Nass (1996) made experiments to study if people are polite to computers in the same way as they are polite to human interviewers, i.e., if they give more positive answers when a computer asks questions about itself than … Continue reading
Sauro, J. and Lewis, J.R. (2009) Correlations among prototypical usability metrics: evidence for the construct of usability. In Proceedings of the 27th international conference on Human factors in computing systems (CHI ’09). ACM, New York, USA, pp. 1609-1618.
DOI= 10.1145/1518701.1518947 To study the correlations between common usability metrics used in typical usability tests, Sauro and Lewis made an analysis focusing on summative usability studies made in practice. They were able to collect data from 90 usability tests in … Continue reading