Monday, June 28, 2004

On gaming program committees

Lance Fortnow has an interesting take on conference acceptance procedures today. He argues:

Two easy ways to improve your paper but lessen your chances of acceptance at a conference: Add more results and simplify your proofs. Adding a result could only increase the usefulness of a paper, but program committees see many results in a paper and conclude that none of them could be very strong. One of our students a few years ago had a paper rejected at STOC; he removed one of his two main theorems and won the best student paper award at FOCS.

Given the same theorem, the community benefits from a simple proof over a complicated proof. Program committees look for hard results so if they see a very simple proof, it can count against you.

I mostly agree with his take on how PCs view papers: simple proofs can indeed be looked down upon. But the interesting question is this: since PC members are the same people who, as authors, suffer from this very problem when not on PCs, what is it about PC membership that causes their judgement to skew?

The usual argument is load: PC members at algorithms conferences typically review far more submissions than PC members at conferences in most other fields, mainly for two interrelated reasons:

1. Our PCs are small
2. PC members are not permitted to submit papers to the conference

(note that (2) more or less forces (1): the larger the PC, the more researchers are barred from submitting, so there is pressure to keep committees small).

Or could one argue that it is right and appropriate for PC members to prune papers in this fashion, and that it is the authors' responsibility to make the best case for their submission in a system that will always be imperfect? One might think this reasoning would encourage, rather than discourage, simple proofs, since they are easier to understand and lead to better exposition in a conference setting.

It seems to me that one reason an elegant proof might be looked down upon, compared to a more technical, grungy one, is that a reviewer who is not intimately familiar with the area of the paper might not appreciate the value of the elegant result, or realize how hard it was to reach such a clean understanding of the problem. This doesn't sound like a problem that can be fixed easily, unless every paper can be reviewed by an expert in its specific area, which seems difficult to manage.

I would like to venture the slightly controversial claim that theory (STOC/FOCS/SODA/etc.) committees are not as rigorous about providing feedback and comprehensive reviews as those of many other conferences. There are good reasons why this is the case, and I don't think one can fault reviewers who do the best they can under severe load, but the fact remains, and it would be nice to see more discussion of this at business meetings or in informal forums in the community.

Although this is somewhat removed from the original point about how papers are judged, I feel that feedback itself is a mechanism for ensuring accountability and openness. A reviewer who has to write a detailed explanation of what they like and don't like in a paper will automatically do a more thorough job of reviewing it. Again, this is not a matter of harassing reviewers: structural changes will have to be made in how theory committees are set up to make this practical.