Interesting post over at hunch.net about reviewers bidding for papers in order to shoot them down. Make sure to read the comments… That mindset among some reviewers might explain why the least informative and most negative reviews always seem to come with the highest confidence rating at ML conferences (NIPS in particular).
The same thing happened to me just recently with my submission to SIGIR.
Since I keep access logs, I could see that they did not even bother to try my tool, even though I stressed that the most compelling part of my demo presentation is seeing how it actually works.
This is hilarious:
http://th.informatik.uni-mannheim.de/People/Lucks/reject.pdf
Rejection letters for some of the most influential papers in Computer Science.