Our Presentations at Koli Calling 2012
As has become a tradition, our group had a strong presence at the recent Koli Calling conference. The papers from the conference are now available in the ACM Digital Library. Here are our papers, with brief comments:
- Ville Karavirta, Juha Helminen, Petri Ihantola: A mobile learning application for Parsons problems with automatic feedback
Ville, Juha, and Petri extended their work on Parsons problems to the mobile side. Incidentally, the non-mobile version of their js-parsons project is now being used in the interactive online version of How to Think Like a Computer Scientist.
In this paper, we present a tool that facilitates the learning of programming by providing a mobile application for Parsons problems. These are small assignments where learners build programs by ordering and indenting fragments of code. Parsons problems are well-suited to the mobile context as the assignments form small chunks of learning content that individually require little time to go through and may be freely divided across multiple learning sessions. Furthermore, in response to previous analysis of students using a web environment for Parsons problems, we describe improvements to the automatic feedback given in these assignments.
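The idea is easy to see in miniature. The real js-parsons tool is a JavaScript library with drag-and-drop and indentation handling; the following is only a hypothetical Python sketch of the core of such an assignment, with the fragments, solution order, and `check` helper invented for illustration:

```python
# A toy Parsons problem: the learner sees these fragments shuffled and must
# put them in the right order (indentation is part of each fragment here).
fragments = [
    "    total += n",
    "total = 0",
    "for n in numbers:",
    "print(total)",
]

# The correct ordering, as indices into `fragments`.
solution = [1, 2, 0, 3]

def check(order):
    """Return True if the learner's ordering reproduces the target program."""
    target = "\n".join(fragments[i] for i in solution)
    attempt = "\n".join(fragments[i] for i in order)
    return attempt == target

print(check([1, 2, 0, 3]))  # a correct solution
print(check([2, 1, 0, 3]))  # initialization misplaced inside the loop
```

A real system would of course give richer feedback than a boolean; the paper's contribution includes exactly such improvements to the automatic feedback.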
Teemu presented a paper based on his master’s thesis, in which he explored beginner programmers’ misconceptions as part of a project to improve our visualization tool UUhistle:
Visual program simulation (VPS) is a form of interactive program visualization in which novice programmers practice tracing computer programs: using a graphical interface, they are expected to correctly indicate each consecutive stage in the execution of a given program. Naturally, students make mistakes during VPS; in this article, we report a study of such mistakes. Visual program simulation tries to get students to act on their conceptions; a VPS-supporting software system may be built so that it reacts to student behaviors and provides feedback tailored to address suspected misconceptions. To focus our efforts in developing the feedback given by our VPS system, UUhistle, we wished to identify the most common mistakes that students make and to explore the reasons behind them. We analyzed the mistakes in over 24,000 student-submitted solutions to VPS assignments collected over three years. Twenty-six mistakes stood out as relatively common and therefore worthy of particular attention. Some of the mistakes appear to be related to usability issues and others to known misconceptions about programming concepts; others still suggest previously unreported conceptual difficulties. Beyond helping us develop our visualization tool, our study lends tentative support to the claim that many VPS mistakes are linked to programming misconceptions and that VPS logs can be a useful data source for studying students’ understandings of CS1 content.
Jan has recently had the opportunity to study how professional programmers behave during training:
In this paper, we describe the research part of a collaboration between a large telecommunications company and Aalto University, built around a programming course arranged by Aalto University for the company’s software developers. The research examined several of the developers’ understandings of software development and software quality in both a learning and a professional development context. The study uses qualitative analysis of questionnaires and interviews to produce descriptions of how the developers understood the software development process and testing, and of the information sources and needs they had when debugging.
Ahmad and the group leaders followed up on their ongoing project to build an instrument that classifies student-submitted algorithm implementations on the basis of the roles of the variables present in the implementation:
Computing educators often rely on black-box analysis to assess students’ work automatically and give feedback. This approach does not allow analyzing the quality of programs or checking whether they implement the required algorithm. We introduce an instrument for recognizing and classifying algorithms (Aari), a form of white-box analysis, to identify authentic sorting algorithm implementations submitted by students in a data structures and algorithms course. Aari uses machine learning techniques to classify new instances. The students were asked to submit a program to sort an array of integers in two rounds: at the beginning of the course, before sorting algorithms were introduced, and after attending a lecture on sorting algorithms. We evaluated the performance of Aari on the implementations of each round separately. The results show that the sorting algorithms that Aari has been trained to recognize are recognized with an average accuracy of about 90%. When considering all the submitted sorting algorithm implementations (including variations of the standard algorithms), Aari achieved an overall accuracy of 71% and 81% for the first and second round, respectively.
In addition, we analyzed the students’ implementations manually to gain a better understanding of the reasons for failure in the recognition process. This analysis revealed that students have many misconceptions related to sorting algorithms, which result in problematic implementations that are less efficient than the standard algorithms. We discuss these variations along with the application of the tool in an educational context, its limitations, and some directions for future work.
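To give a flavor of white-box recognition in general: the toy classifier below is NOT Aari's method (the abstract says Aari uses the roles of variables and machine learning); it merely illustrates how structural features extracted from source code can separate families of sorting algorithms. The feature choices and labels here are invented for the example.

```python
import ast

def features(source):
    """Extract (max loop nesting depth, is recursive) from one function def."""
    tree = ast.parse(source)
    func = next(n for n in ast.walk(tree) if isinstance(n, ast.FunctionDef))
    max_depth = 0

    def walk_depth(node, d):
        nonlocal max_depth
        if isinstance(node, (ast.For, ast.While)):
            d += 1
            max_depth = max(max_depth, d)
        for child in ast.iter_child_nodes(node):
            walk_depth(child, d)

    walk_depth(func, 0)
    recursive = any(
        isinstance(n, ast.Call)
        and isinstance(n.func, ast.Name)
        and n.func.id == func.name
        for n in ast.walk(func)
    )
    return max_depth, recursive

def classify(source):
    """Crude label based on two structural features (illustration only)."""
    loops, recursive = features(source)
    if recursive:
        return "divide-and-conquer sort (e.g. merge/quick sort)"
    if loops >= 2:
        return "quadratic nested-loop sort (e.g. bubble/insertion sort)"
    return "unknown"

bubble = """
def sort(a):
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a
"""

quick = """
def sort(a):
    if len(a) <= 1:
        return a
    pivot, rest = a[0], a[1:]
    return (sort([x for x in rest if x < pivot])
            + [pivot]
            + sort([x for x in rest if x >= pivot]))
"""

print(classify(bubble))  # nested loops, no recursion
print(classify(quick))   # recursive, no explicit loops
```

Such a scheme obviously cannot distinguish, say, bubble sort from insertion sort; that is precisely why Aari relies on richer features and a trained classifier.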
Finally, Lasse had an engaging presentation and poster of his edutainment project, DSAsketch:
Visualizations are often used to teach abstract concepts like algorithms and data structures. In this poster, we present a drawing game, DSAsketch, that can be used to practice a wide range of concepts related to data structures and algorithms. During the game, players view, construct, and present their own visualizations, and are thus required to engage with the visualizations in many different ways.