Archive for January, 2010

Spam Filtering by Learning a Pattern Language

Tuesday, January 26th, 2010

New Scientist describes a new method for spam detection based on learning patterns. The method exploits the spammers’ most powerful weapon – the automatic generation of many similar messages (i.e., from some grammar in a formal language) – and turns it against them. The article reports that a pattern can reliably be learned from about 1000 examples captured from a bot, allowing the method to classify new messages accurately and with zero false positives. This sounds really exciting given my full spam folder.

However, I’m a bit cautious. The article is sparse on technical details, so I might be making some wrong assumptions here. First, the reported zero false positives presumably refers to discriminating messages generated by that particular spam grammar from all other messages – at least that’s how I understand it. Second, it seems from the article that they learn from positive examples only. Overall, the technique sounds to me like learning a pattern language. Pattern languages are a class of languages that overlaps with the linear and the context-sensitive languages in the Chomsky hierarchy. Unfortunately they don’t have a real Wikipedia page, so I’ll try to give a bit of background. A pattern is a string of constants and variables, and its language is the set of all strings obtained by substituting non-empty strings for the variables (with the same substitution for repeated occurrences of a variable). The closest analogy I can offer right now is regular expressions with back-references. I’m not sure this is an accurate description of all possible patterns, but it’s close enough for an example.
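To make the analogy concrete, here is a tiny illustration in Python – the pattern and the messages are made up by me, not taken from the article. The repeated variable is exactly what a back-reference expresses:

import re

# The (made-up) pattern "x1 says: buy x2 now -- trust x1" has two variables;
# x1 occurs twice, so both occurrences must be filled with the same string.
# A regular expression with a back-reference (\1) captures this:
pattern = re.compile(r"^(.+) says: buy (.+) now -- trust \1$")

print(bool(pattern.match("Bob says: buy pills now -- trust Bob")))  # True
print(bool(pattern.match("Bob says: buy pills now -- trust Eve")))  # False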

I don’t know how the specific technique mentioned in the article works in detail, but I’ve learned two things about learning grammars from text (i.e., from positive examples only): (a) we can’t learn all linear or all context-sensitive languages this way, but we can learn all pattern languages; (b) learning patterns without negative examples leads to over-generalization really, really fast.
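To illustrate point (b), here is a strawman positive-only learner of my own (not the method from the article): it aligns equal-length examples position by position and turns every disagreement into a variable, written here as ‘?’. A handful of diverse examples is enough to wash out most of the constants:

def generalize(examples):
    """Position-wise generalization of equal-length strings: keep a
    character where all examples agree, else introduce a variable '?'."""
    return "".join(cs[0] if len(set(cs)) == 1 else "?"
                   for cs in zip(*examples))

print(generalize(["buy cheap pills now",
                  "buy magic pills now",
                  "try magic pills now"]))
# -> "??y ????? pills now"
# Each diverse example erases more constants; without negative examples,
# nothing stops the hypothesis from degenerating into the all-variable
# pattern, which matches every message.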

While I haven’t worked on learning grammars in a long while, the only algorithm I’m aware of is the Lange-Wiehagen algorithm (Steffen Lange and Rolf Wiehagen: Polynomial-time inference of arbitrary pattern languages. New Generation Computing, 8(4):361-370, 1991). This algorithm is not a consistent learner, but it can learn all pattern languages in polynomial time. There might be better ones available by now, but learning grammars is not that popular in the machine-learning community at the moment. I’m sure there are other interesting applications besides spam filtering. Maybe it’s time for a revival.
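From memory, and greatly simplified, the Lange-Wiehagen strategy builds its hypothesis from the shortest examples seen so far, which is one reason it need not be consistent with all the data. The sketch below is my caricature of that idea, not a faithful implementation of the published algorithm (which, among other things, is smarter about identifying variables):

def lw_style_pattern(examples):
    """Hypothesize a pattern from the shortest positive examples only:
    keep a constant where they all agree, introduce a fresh variable
    where they differ. Longer examples are ignored entirely."""
    shortest = min(len(e) for e in examples)
    minimal = [e for e in examples if len(e) == shortest]
    pattern, var = [], 0
    for cs in zip(*minimal):
        if len(set(cs)) == 1:
            pattern.append(cs[0])
        else:
            var += 1
            pattern.append(f"<x{var}>")
    return "".join(pattern)

print(lw_style_pattern(["buy pills now", "get pills now", "buy meds today"]))
# -> "<x1><x2><x3> pills now"  (the longer third example plays no role)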

Overall, it sounds like a promising new anti-spam technique, but I’d like to see some more realistic testing. There are some obvious ways for spammers to make learning these patterns harder, but either way I’m curious: maybe the inventors of this technique have discovered a better way to learn patterns, perhaps by using some problem-specific domain knowledge?

Strong profiling is not mathematically optimal for discovering rare malfeasors (on rare event detection)

Sunday, January 10th, 2010

Just in time for the latest Christmas terror scare, I came across an interesting paper: “Strong profiling is not mathematically optimal for discovering rare malfeasors” (William H. Press, PNAS 106(6), pp. 1716-1719, www.pnas.org/cgi/doi/10.1073/pnas.0813202106). In the paper, the author investigates whether profiling by nationality or ethnicity can be justified mathematically, and tries to answer the question of how much screening we must do, on average, to catch the bad guys in the crowd. Rare-event detection is hard as it is, and it’s interesting to see it approached from the sampling perspective. It’s an interesting and short read. Long story short: screening based on a crude feature like nationality or ethnicity is not optimal (nor is any screening in proportion to, or stronger than, the prior probabilities), and it wastes resources.
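To sketch the core argument as I understand it: if person i is the malfeasor with prior probability p_i, and each screening independently picks person i with probability q_i (sampling with replacement), then the expected number of screenings until the malfeasor is caught is the sum over i of p_i/q_i, which is minimized by choosing q_i proportional to sqrt(p_i) rather than to p_i. A toy computation (the population numbers are made up by me):

import math

p = [0.9] + [0.1 / 99] * 99   # made-up priors: one "suspicious" person, 99 others

def expected_screenings(p, q):
    # If the malfeasor is person i (probability p[i]) and each screening
    # picks person i with probability q[i], the wait is geometric with
    # mean 1/q[i]; averaging over i gives sum(p[i] / q[i]).
    return sum(pi / qi for pi, qi in zip(p, q))

def normalize(w):
    total = sum(w)
    return [wi / total for wi in w]

strategies = {
    "uniform": normalize([1.0] * len(p)),
    "strong profiling (q ~ p)": normalize(p),
    "square-root biased (q ~ sqrt(p))": normalize([math.sqrt(pi) for pi in p]),
}
for name, q in strategies.items():
    print(f"{name:35s} {expected_screenings(p, q):7.2f}")
# uniform and strong profiling both need ~100 screenings in expectation;
# square-root biased sampling needs only ~16.8.

If I read the paper right, that is exactly its punch line: in this model, profiling in proportion to the prior buys you nothing over uniform screening, while square-root biased sampling genuinely helps.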