We must reform CHI or start an alternative
Many of the challenges reported by the sciences are also increasingly faced by the multidisciplinary communities that form HCI. These challenges include funding, poorly designed studies, missing replications, broken peer review, paywalling, poor communication, and stressful careers. Is the HCI community responding to these challenges? The CHI team is dedicated to developing the conference and has recently introduced many changes. However, some issues seem to change slowly. Here’s a summary of ten criticisms that I tend to agree with:
1) CHI has met the limits to growth. Too many submissions, too many tracks, too many presentations, too crazy. Simply: the proportion of interesting and important papers does not grow as quickly as the number of submissions. As Stanislaw Lem wrote, it is easier to find 1 interesting paper among 1,000 than to find 1,000 among 1,000,000. I’m afraid that CHI may end up with a large chunk of sterile, nice-to-know papers that dilute the conference and dilute our field. The Best Paper system has not changed this. I am not suggesting that we can improve the field simply by reducing the number of papers published at CHI. I am suggesting that making CHI more selective would have second-order effects by shaping goal-setting and priorities, and by supporting shared attention on the most important topics.
2) CHI incentivizes novelty and quantity over rigorous, principled, and “boring” research: For instance, CHI’s guide to successful submissions states that “Novelty is highly valued at CHI”. And this is exactly what lessons learned from other fields should warn us against: “As long as the incentives are in place that reward publishing novel, surprising results, often and in high-visibility journals above other, more nuanced aspects of science, shoddy practices that maximise one’s ability to do so will run rampant.”
3) The peer-review system is in at the deep end. The most relevant reviewers are ACs, and the reviewer pool has been half-empty for a long time. CHI has introduced numerous improvements, such as desk rejects, early 2ACing, and the rebuttal phase. While these improve quality, they have also tended to increase workload, and they do not fix the underlying problem: reviewing is too much like a lottery. What could be done? One thing to try out is publishing reviews with reviewers’ names. NIPS, for example, does this. Leaving a permanent, non-anonymized record might help.
4) Exorbitant participation costs. The new normal is a registration fee of 1,200–1,400 USD. Add travel and overpriced accommodation to that. And these prices keep rising!** Compare this to NIPS, which asks 400 USD and boldly states: “This is the 13th year in which we have not increased our fees.”
** I was wrong here, as Jofish Kaye kindly pointed out: “Registration costs have been the same at CHI since 2008: $800 for the 4-day early member registration (which is the most popular option), and $400 for the student 4-day early member registration.” (Nonetheless, the total costs of conference travel remain high.)
5) Unacceptable open access policy: ACM is not committed to an acceptable Open Access policy. European funding agencies, for example, insist on choosing the open access option. However, ACM charges considerable open access fees on a per-paper basis. Given that ACM releases only those papers while the rest remain paywalled, the net effect is that European taxpayers support an American-based organization while the majority of research remains behind paywalls. If ACM does not want to change, the Vox article suggested a radical move: “One radical step would be to abolish for-profit publishers altogether and move toward a nonprofit model.”
6) US-centeredness: If you look at the places where the conference and the PC meetings are held, and compare that to the countries that contribute the most papers and ACs, you’ll spot an annoying discrepancy: about three-fourths of the venues have been in North America, although roughly half or more of the contributors are from elsewhere. Beyond the issues of unrepresentativeness and unfairness, many feel that the US and Canada are running out of interesting places. San Jose was a great conference but perhaps the least interesting city to host CHI. And Denver, really? Seoul and Paris, to me, were among the best.
7) An overpacked program that does not foster intellectual debate. The conference offers very little value in terms of genuine debate that promotes intellectual growth. Let’s say you get lucky and get your full paper in. What happens next is this: You step on the podium, give your 17-minute talk, answer 2–3 mostly random questions, and you’re done. That’s it! There is no real discussion of the work, because there is no room for it. The organizers have been maximizing the growth of attendance at the expense of intellectual growth. Workshops do not solve this, naturally, because they target earlier stages of research. One option to consider, similar to, e.g., computer vision conferences, is to divide papers into oral vs. poster presentations, or long vs. short presentations.
8) No journal. Yes, I can publish a paper in, say, TOCHI, and come to CHI to present it. That’s splendid. But why not the other way around? UbiComp made a bold move toward a VLDB-like hybrid where papers can be submitted to the journal at any time, and accepted papers get to be presented at the annual conference. I’m afraid this might not happen at CHI, simply because there are too many papers and their archival value is becoming debatable.
9) Stressful once-per-year deadline. This is not only unnecessarily stressful; it also incentivizes short-term planning. If you have to choose between a research problem that can be solved in 8 months and another that takes 2–3 years, which one would you pick under pressure to advance your career by churning out CHI papers? Worse, the risk of “losing one year” may incentivize authors to bloat their claims about contributions.
10) Too many poorly designed studies. While not all papers need to have a study, my sense is that in those that do, statistical power tends to be low. Too often studies are designed to confirm preconceived ideas instead of critically testing them. There are many delightful exceptions each year, of course. But something needs to be done to improve the quality of empirical evidence for our claims. And I feel (but have no evidence) that there may be a structural flaw: some subcommittees may be better prepared and have higher standards in this regard than others. However, these subcommittees are out of touch with each other.
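To make the power concern concrete, here is a back-of-the-envelope sketch (my own illustration with assumed numbers, not drawn from any particular CHI paper): with a typical small sample and a medium-sized effect, a standard test has well under a coin-flip’s chance of detecting the effect, and reaching conventional 80% power takes roughly three times as many participants.

```python
# Rough power analysis for a two-sided one-sample/paired t-test,
# using a normal approximation. Effect size d = 0.5 ("medium") is an
# assumption for illustration only.
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def approx_power(n: int, d: float, z_crit: float = 1.96) -> float:
    """Approximate power at alpha = 0.05 (two-sided), sample size n,
    standardized effect size d (Cohen's d)."""
    return normal_cdf(d * sqrt(n) - z_crit)

def n_for_80_power(d: float, z_crit: float = 1.96) -> int:
    """Smallest n reaching ~80% power; 0.8416 is the z-score of 0.80."""
    return int(((z_crit + 0.8416) / d) ** 2) + 1

# With n = 12 and d = 0.5, power is only about 0.41:
print(round(approx_power(12, 0.5), 2))   # -> 0.41
# Reaching 80% power for the same effect needs roughly 32 participants:
print(n_for_80_power(0.5))               # -> 32
```

The normal approximation slightly overstates power at small n (an exact t-test calculation gives a bit less), which only strengthens the point.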
None of this is new. These issues have been raised at PC meetings, in hallways, in a recent discussion about EuroCHI, and in CHI Meta. Most researchers I know agree with at least a subset of these issues. If we take HCI seriously, as we should, we are obliged to discuss improvements. I am personally most concerned about points 2, 5, and 7.
There are roughly three options now: 1) reform CHI, 2) improve the other (specialized) HCI conferences to let steam out of CHI, or 3) establish a new general-purpose conference following an alternative model. Discussing which road is best is beyond this post. However, CHI has shown extraordinary ability to develop itself over the years, and none of the issues is beyond its remit. While I wish these changes to happen through internal reform – CHI is still my favorite conference – if changes are too slow or bear no results, we should not be afraid of trying out alternative models.
Post scriptum (Sep 29): Many thanks to the numerous commenters here on this blog and on Facebook, and to those who emailed me. The post has made the rounds on social media and has even been attended to by our friends higher up in the CHI organization. Thank you for listening. To sum up, the feedback I got can be divided into “good news” and “bad news”. Let’s start with the bad news: 1) According to a bibliometric analysis, a large proportion of CHI papers gather relatively few citations, for example when compared to NIPS; 2) The most common sample size in CHI studies is only 12, which, needless to say, is associated with unacceptably low statistical power; 3) In the UK’s national research assessment, the “value” of a CHI paper has already been dropping relative to other CS conferences. Then the good news: a good number of the issues have been challenged, such as the claim that participation fees are rising (fixed, see above), my claim that CHI is US-centered (this is more balanced across all SIGCHI conferences and possibly improving), and my claims that “boring” research is thwarted and that there is no intellectual debate (see Jeffrey’s comment below). It was also pointed out to me that CHI is actively discussing its position on open science, and that it has opened the proceedings for a limited time after each conference. While not a truly open policy (in perpetuity), they are acting on this, which is great. Even better, I learned that CHI may be setting up a steering committee that might attend to things like this. What next? We should get to the root of these issues and discuss what we can do about them. Here’s a good starting point for that end: http://rsos.royalsocietypublishing.org/content/royopensci/3/9/160384.full.pdf
 Lexing Xie: Visualizing citation patterns of CS conferences: http://cm.cecs.anu.edu.au/post/citation_vis/
 Kelly Caine, Proc. CHI’16: http://dl.acm.org/citation.cfm?id=2858498