Third-party blog analytics on the server side (and what it has to do with visual feedback and automated assessment)

We, like most bloggers, are curious about statistics: how many of you there are, where you are from, and so on. To satisfy our curiosity, we wanted to use Google Analytics to learn something about the readers of this blog, or at least to find out when someone finds his or her way here. Unfortunately, this was not an easy task. Google Analytics is based on adding a small JavaScript snippet to pages, and Aalto blogs doesn't allow users to do that. Yet if you view the source of this page, you see an analytics snippet; why's that? It turns out that, yes, the administration has its own analytics attached to every blog. But no, bloggers won't get their accounts added to the analytics so that they could see the statistics, although some screenshots are available by emailing the IT services. A silly setup, I would say.

What we did, and what other Aalto bloggers can also do, is set up pixel tracking combined with server-side Google Analytics. This does not give us all the statistics that client-side tracking could provide, but at least we get a visitor count and geolocation for every page we track. The small wrapper we wrote on top of php-ga is available on GitHub.
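To give an idea of how pixel tracking works in general, here is a minimal sketch in Python (not our PHP wrapper; the endpoint and logging are illustrative): a tiny WSGI app that serves a 1x1 transparent GIF and records each request before returning it.

```python
# A 1x1 transparent GIF, byte for byte.
TRACKING_PIXEL = (
    b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
    b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00\x00"
    b"\x02\x02D\x01\x00;"
)

def record_hit(environ):
    # Placeholder: a real tracker would forward the visitor's IP, user agent
    # and requested page to Google Analytics at this point.
    print(environ.get("REMOTE_ADDR"), environ.get("HTTP_USER_AGENT"))

def tracking_app(environ, start_response):
    """WSGI app: log the hit, then answer with the tracking pixel."""
    record_hit(environ)
    start_response("200 OK", [
        ("Content-Type", "image/gif"),
        ("Content-Length", str(len(TRACKING_PIXEL))),
        ("Cache-Control", "no-store"),  # make every page view hit the server
    ])
    return [TRACKING_PIXEL]
```

The blog page then embeds an ordinary `<img>` tag pointing at this endpoint; any WSGI server can host the app. The `Cache-Control: no-store` header matters, since a cached pixel would silently swallow repeat visits.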

How is this related to Learning + Technology?

Previously, we have successfully used a similar image injection approach to add visualizations to a web-based automated-assessment platform that gives feedback on programming exercises:

Petri Ihantola, Ville Karavirta, and Otto Seppälä (2011). Automated Visual Feedback from Programming Assignments. In: Proceedings of the Sixth Program Visualization Workshop. Darmstadt, Germany, pp. 87-95. (pdf)

The idea presented in the paper is that textual feedback originating from, say, descriptions of the assertions in unit tests, may be enriched with HTML. We can make feedback visual or even interactive – without changes to the underlying assessment platform. The technique is discussed in more detail in my dissertation.

In the paper, we report a study with tightly specified programming assignments that we assessed by comparing object graphs: the student's submission against a model solution. If differences were found, we visualized the expected and actual object graphs side by side and highlighted the differences with the (now deprecated) Graphviz support of Google Image Charts. This API lets us fetch images from URLs where the URL itself defines, in Graphviz's dot notation, the graph to be returned. For example, view the source of this page to find out how the following graph is created:

Unfortunately, this experimental feature of the API has been deprecated, and sooner or later you won't be able to see the visualization above this paragraph.
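For illustration, a chart URL of that kind could be built like this (the `cht=gv` and `chl` parameters are from the old, deprecated Image Charts API, and the graph is a made-up example):

```python
from urllib.parse import quote

def dot_chart_url(dot):
    """Encode a Graphviz dot description into an Image Charts URL."""
    # cht=gv selects the (deprecated) Graphviz chart type;
    # chl carries the whole graph, URL-encoded, in dot notation.
    return "https://chart.googleapis.com/chart?cht=gv&chl=" + quote(dot)

url = dot_chart_url("digraph{expected->actual}")
```

Fetching such a URL with an `<img>` tag was all it took to render a graph, which is exactly why it fit so neatly into HTML-enriched feedback.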

It would be nice to hear your comments on how to use automated assessment “creatively”.