When I started my blog I was already aware of the comment-spam problem, so I enabled a WordPress plugin to prevent comment spam (“did you pass math”). The other day a friend complained that when he wanted to comment on something and forgot to fill out the math field, his comment got lost (and pressing the back button made his browser lose everything he had typed up). I had also been reading through raw Apache logs and saw somebody trying to post a comment and apparently not succeeding. So I turned the plugin off, and within a day I had 8 spam comments on my blog (which does not have a high PageRank and uses nofollow links; what’s the gain?)… So I’ll keep it turned on. There!
Spam is an interesting problem because you have an “adversary” with a lot of resources who will do whatever it takes to get your attention: an email in your inbox or a comment with links on your blog. The more filters we build, even with machine learning, the more sophisticated the spammers become. Spam will probably be a driving force for classification research for some time to come. However, machine learning and filters are very expensive in CPU time and do not scale well. Sander told me about the email server at his institute having a backlog of 40 gigabytes of email, i.e. 40 gigabytes sitting in the spool waiting to be scanned for spam and viruses. Given that this server serves only about 50 users, and that 99% of the email in the spool is probably spam, that illustrates the problem. Currently (in my opinion) mechanisms like greylisting are a better solution simply because they scale better: they exploit “implementation issues” in spam software and don’t require a CPU-intensive scan of every email. That is, until the next generation of spam bots adapts to those measures. Build a better spam filter and somebody will build a better spam.
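To make the greylisting idea concrete, here is a toy sketch (not any particular MTA’s implementation): the server temporarily rejects mail from a (client IP, sender, recipient) triplet it hasn’t seen before, and only accepts once the sender retries after a minimum delay. A real mail server retries on a 4xx response; most naive spam bots fire once and move on, so their mail never gets through and never needs an expensive content scan.

```python
import time


class Greylist:
    """Toy greylisting sketch: temporarily reject mail from unseen
    (client_ip, sender, recipient) triplets; accept once the sender
    retries after a minimum delay, the way a legitimate MTA would."""

    def __init__(self, min_delay=300):
        self.min_delay = min_delay  # seconds a sender must wait before retrying
        self.seen = {}              # triplet -> timestamp of first attempt

    def check(self, client_ip, sender, recipient, now=None):
        """Return an SMTP-style reply for this delivery attempt."""
        now = time.time() if now is None else now
        triplet = (client_ip, sender, recipient)
        first_seen = self.seen.get(triplet)
        if first_seen is None:
            # First attempt: remember it and ask the sender to retry later.
            self.seen[triplet] = now
            return "451 Temporary failure, please try again later"
        if now - first_seen < self.min_delay:
            # Retried too soon; keep deferring.
            return "451 Temporary failure, please try again later"
        # Sender retried after the delay: almost certainly a real MTA.
        return "250 OK"
```

Note that the check is a dictionary lookup, not a content scan, which is exactly why it scales so much better than filtering every message body.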