Category Archives: Journal article
Norman, D. A., & Verganti, R. (2014). Incremental and radical innovation: Design research vs. technology and meaning change. Design Issues, 30(1), 78-96.
User-centered or human-centered design (HCD) has an iterative cycle of investigation including observation, ideation, rapid prototyping and testing. According to Norman, HCD can only lead to incremental enhancements of the product. This process can be compared to the mathematical procedure … Continue reading
Calvo, R. A., & Peters, D. (2012). Positive computing: Technology for a wiser world. interactions, 19(4), 28-31.
Technology affects our development of wisdom. Wisdom is developed through personal experiences, which are increasingly transformed by computers. As computers are constantly present in our daily lives, human values should be included in the design process. In addition to designing … Continue reading
Clemmensen, T., Hertzum, M., Hornbaek, K., Shi, Q. and Yammiyavar, P. (2009) Cultural cognition in usability evaluation. Interacting with Computers, Vol. 21, No. 3, pp. 212-220.
DOI= 10.1016/j.intcom.2009.05.003 This article discusses the cultural differences between Eastern and Western people in thinking-aloud tests. Eastern people in this paper mean people with a background from China or “countries heavily influenced by its culture”, and Western people … Continue reading
Benford, S., Greenhalgh, C., Giannachi, G., Walker, B., Marshall, J., Rodden, T. Uncomfortable User Experience. Communications of the ACM. Vol 56, No 9. 2013
CHI article on the cover of the CACM = must read. Benford et al.’s article is about creating and utilizing uncomfortable user experiences in design. The authors derive examples and ideas from art, media and amusement parks. The main idea … Continue reading
Siegel, D., Sorin, A., Thompson, M., Dray, S. Fine-Tuning User Research to Drive Innovation. Interactions. September-October 2013. pp. 42-49
The article focuses on the interesting and difficult theme of innovating based on user research. “User-centered innovation” is a problematic area of design and engineering, since traditional user research methods seem to produce a basis for incremental improvements instead of new and novel ideas … Continue reading
Klein, G., Calderwood, R., MacGregor, D. Critical Decision Method for Eliciting Knowledge. IEEE Transactions on Systems, Man, and Cybernetics, Vol. 19, No. 3, 1989
Klein et al.’s paper describes an interview method that focuses on non-routine events. The method has been developed for studying decision making but it seems to be potentially applicable to UCD/HCI also. The core idea of the method is to … Continue reading
Foelstad, A. and Hornbaek, K. (2010) Work-domain knowledge in usability evaluation: Experiences with Cooperative Usability Testing. The Journal of Systems and Software, Vol. 83, No. 11, pp. 2019-2030.
DOI= 10.1016/j.jss.2010.02.026 Foelstad and Hornbaek studied the use of Cooperative Usability Testing in the development of two work-domain specific systems. As modifications to the original method, they included an interpretation phase after each task, and used task-scenario walkthroughs instead of … Continue reading
Woolrych, A., Hornbaek, K., Froekjaer, E., and Cockton, G. (2011). Ingredients and meals rather than recipes: A proposal for research that does not treat usability evaluation methods as indivisible wholes. International Journal of Human-Computer Interaction, Vol. 27, No. 10, pp. 940-970.
DOI= 10.1080/10447318.2011.555314 Woolrych et al. nicely analyse the state of research on and comparisons of usability evaluation methods. Too often, these methods are treated as precisely presented step-by-step procedures that almost automatically produce a list of usability problems regardless of the … Continue reading
Lindgaard, G. and Chattratichart, J. (2007) Usability testing: what have we overlooked? In Proceedings of the SIGCHI conference on Human factors in computing systems (CHI ’07). ACM, New York, NY, USA, pp. 1415-1424.
DOI= 10.1145/1240624.1240839 The studies by Lindgaard and Chattratichart indicate a need to shift the focus from the number of test users to the number of test tasks in usability testing. Lindgaard and Chattratichart analysed the results of several usability teams … Continue reading
Holleran, P.A. (1991) A methodological note on pitfalls in usability testing. Behaviour & Information Technology, Vol. 10, no. 5, pp. 345-357.
DOI:10.1080/01449299108924295 Good usability testing is similar to good empirical research: the use of improper procedures will result in invalid data, and thereby poor validity and reliability. Holleran categorises pitfalls in usability testing into three groups: sampling problems mainly in planning … Continue reading
Orne, M.T. (1962) On the social psychology of the psychological experiment: With particular reference to demand characteristics and their implications. American Psychologist. Vol. 17, No. 11, pp. 776-783.
Accession Number= 00000487-196211000-00005 Finally, I found an article presenting notes from social psychology studies regarding the behavior of people as test participants. Although usability tests are not generally treated as scientific or psychological experiments, many phenomena similar to those in Orne’s … Continue reading
Höysniemi, J., Hämäläinen, P. and Turkki, L. (2003) Using peer tutoring in evaluating the usability of a physically interactive computer game with children. Interacting with Computers, Vol. 15, No. 2, pp. 203-225.
DOI= 10.1016/S0953-5438(03)00008-0 This study used peer tutoring to evaluate an interactive computer game with children. They used either a pair of children or one child at a time to teach the use of the game to another child. This way, … Continue reading
Kennedy, S. (1989) Using video in the BNR usability lab. SIGCHI Bulletin. Vol. 21, No. 2, pp. 92-95.
DOI=10.1145/70609.70624 Co-discovery learning shares many principles with constructive interaction but, in addition, has a list of specific tasks and includes a reflection on the task difficulty after each task. Sue Kennedy and her colleagues used this method in evaluating various … Continue reading
Trudel, C-I. and Payne, S.J. (1995) Reflection and goal management in exploratory learning. International Journal of Human-Computer Studies. Vol. 42, No. 3, pp 307-339.
DOI= 10.1006/ijhc.1995.1015 In these experiments, Trudel and Payne studied the effect of constraining the number of keystrokes that subjects were allowed to make while they were learning to use a new interactive device. They also tried the effect of having … Continue reading
Trudel, C-I. and Payne, S.J. (1996) Self-monitoring during exploration of an interactive device. International Journal of Human-Computer Studies, Vol. 45, No. 6, pp. 723-747.
DOI= 10.1006/ijhc.1996.0076 Trudel and Payne are interested in how people learn to use interactive devices and how this learning can be supported. Their studies relate to usability testing as they made experiments where “subject explored an unfamiliar interactive device without … Continue reading
Robinson et al. (2000) Diary as dialogue in papermill process control, Communications of the ACM, vol. 43, pp. 65-70
This paper presents an e-diary that was used to replace a paper diary in a papermill. Entries in the diary consist of dialogues within and between work shifts. The entries do not call for specific responses, but they can evoke … Continue reading
Wenger, Etienne C., Snyder, William M. (2000) Communities of practice: The Organizational Frontier, Harvard Business Review, January-February 2000, pp. 139-145
Communities of practice are defined as follows: “They’re groups of people informally bound together by shared expertise and passion for a joint enterprise”. People in these communities share their expertise and knowledge in free-flowing, creative ways that foster new approaches … Continue reading
Schulte-Mecklenbeck, M. and Huber, O. (2003) Information search in the laboratory and on the Web: With or without an experimenter. Behavior Research Methods, Instruments, & Computers, Vol. 35, No. 2, pp. 227-235.
DOI= 10.3758/BF03202545 The studies of Schulte-Mecklenbeck and Huber focused on the effect of the test location, comparing laboratory settings with uncontrolled settings in locations selected by the users. The users were asked to do risky decision making, and find … Continue reading
Raita, E. and Oulasvirta, A. (2011) Too good to be bad: Favorable product expectations boost subjective usability ratings. Interacting with Computers, Vol. 23, No. 4, pp. 363-371.
DOI= 10.1016/j.intcom.2011.04.002 This article studies the effect of positive or negative priming on subjective usability ratings given after a test. The priming was done with two different versions of a product review given to the users before starting the test … Continue reading