Link Spam on Wikis: Attack Models and Mitigation

Andrew G. West et al. (see footer), University of Pennsylvania

Much research on abusive behaviors in wiki environments has focused on poorly motivated cases of vandalism. However, a recent line of research at the University of Pennsylvania has instead concentrated on link spam in such environments. Presuming link spammers are well-incentivized (by profit), the authors expected to discover more technically sophisticated and evasive behaviors. Surprisingly, optimal spam behaviors were absent, motivating the authors to propose a novel (and controversial) attack model. After statistical estimation found the attack could prove economically viable, a machine-learning classifier was created to address the vulnerability. The feature-rich model autonomously identified 64% of status quo spam at a 0.5% false-positive rate in testing on English Wikipedia. Moreover, the technique has a live implementation that intelligently routes humans to probable link spam.

Example of exploiting a wiki's edit-anywhere semantics

The initial measurement study and novel attack model were presented at CEAS 2011 (Collaboration, Electronic Messaging, Anti-Abuse, and Spam Conference) in a paper entitled “Link Spamming Wikipedia for Profit,” where it won the (co-)best paper award. As the authors note, collaborative and Web 2.0 environments are not a new battleground for spam, as the well-surveyed cases of blog comment and forum spamming attest. However, wikis have unique properties a spammer might leverage, including: (a) edit-anywhere semantics (see image; blog comments and forums are often append-only), (b) global editing (social networks are often topology-limited), and (c) community-driven mitigation.

To study the phenomenon, the researchers first assembled a corpus of link spam additions to English Wikipedia. The major takeaways were two-fold. First, Wikipedia link spam is more diverse than that seen in traditional spam domains (e.g., email). In addition to commercial storefronts, there were many social-networking links and personal blogs that fail to meet Wikipedia’s notability requirements. Thus, such “spammers” might not even have malicious intent, arriving at that label only as a consequence of the wiki’s low barrier to entry. Second, even dedicated attackers exhibited non-optimal behaviors. At present, spammers seem to rely on subtlety, hoping that adherence to convention will allow a link to persist on an article and extract long-term utility. Instead, Wikipedia’s diligent editor community quickly identifies such violations, with the median link spam instance receiving just six page views before removal.

Power-law distribution of viewership over articles

This result came as a surprise to the researchers, who wondered whether naive attackers or the strength of Wikipedia’s defenses were to blame. Drawing on their expertise of the community, they proposed a novel link spam attack model that exploits several vulnerabilities. In brief, while Wikipedia’s human defenders are thorough, they are inherently latent, creating brief windows of opportunity that can be aggressively exploited. In particular, English Wikipedia’s 85 billion yearly page views follow a power-law distribution over articles (see image), creating high-value targets. Traffic spikes also mirror cultural events. For example, during the 2010 Super Bowl (American football), the article “The Who” (the musical act playing the halftime show) received nearly 150 views per second. Another crucial vulnerability was the ability to autonomously attain privileged accounts, enabling high-value targets to be edited at high rate limits.
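A back-of-the-envelope sketch makes the latency window concrete. The 150 views-per-second figure comes from the Super Bowl example above; the revert latencies are hypothetical, chosen only to illustrate why peak-traffic articles are high-value targets:

```python
# Rough estimate of spam-link impressions accrued before human editors
# revert an edit. Latency values are hypothetical illustrations.
def impressions(views_per_second, revert_latency_seconds):
    """Expected page views a spam link receives before reversion."""
    return views_per_second * revert_latency_seconds

# A peak-traffic target ("The Who" during the 2010 Super Bowl):
peak = impressions(150, 60)   # one minute of latency -> 9000 views
# A typical article in the power-law tail sees almost nothing:
tail = impressions(0.01, 60)  # ~0.6 views in the same window
print(peak, tail)
```

Even a short-lived edit on a spiking article can thus dwarf the roughly six views the median status quo spam link receives over its entire lifetime.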

Statistical estimation (guided by an earlier economic study of email spam) suggested the attack could prove profitable to its perpetrators, posing a viable threat to the encyclopedia. To this end, a concurrently published paper was authored, entitled “Autonomous Link Spam Detection in Purely Collaborative Environments,” which appeared at WikiSym 2011. While certain configuration tweaks can aid in wiki defense, the authors ultimately advocate an autonomous spam classifier. Using a link spam corpus (as per the previous study), the project sought to identify features indicative of link spam behaviors. Over 55 features were implemented, the most useful being the number of “backlinks” a URL has received. Cross-validation showed that 64% of spam could be autonomously detected at a 0.5% false-positive rate (see image).

Recall vs. FP-rate of spam classifier
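The “recall at a fixed false-positive rate” operating point can be sketched as follows. The scores below are synthetic stand-ins for model output, not the paper’s actual classifier or data; only the 0.5% target mirrors the reported result:

```python
import random

def recall_at_fp_rate(spam_scores, ham_scores, max_fp_rate=0.005):
    """Pick the threshold maximizing recall subject to an FP-rate cap."""
    # Lower thresholds flag more edits, so the lowest threshold whose
    # false-positive rate still satisfies the cap yields maximum recall.
    for threshold in sorted(set(spam_scores) | set(ham_scores)):
        fp_rate = sum(s >= threshold for s in ham_scores) / len(ham_scores)
        if fp_rate <= max_fp_rate:
            recall = sum(s >= threshold for s in spam_scores) / len(spam_scores)
            return recall, threshold
    return 0.0, None

random.seed(42)  # synthetic scores: spam skews high, benign skews low
ham_scores = [random.betavariate(2, 8) for _ in range(1000)]
spam_scores = [random.betavariate(8, 2) for _ in range(200)]

recall, threshold = recall_at_fp_rate(spam_scores, ham_scores)
print(f"recall {recall:.0%} at threshold {threshold:.3f}")
```

Sweeping `max_fp_rate` over a range of values produces exactly the recall-vs-FP-rate curve shown in the figure above.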

This success motivated a live implementation of the technique. Although bot approval (i.e., to automatically revert likely link spam) is still pending, the probabilities output by the machine-learning model hold enormous potential for human reviewers. Rather than searching for link spam using brute-force strategies, these probabilities can intelligently route humans to likely spam instances. To achieve this in practice, edits are managed in a server-side priority queue and fed to the client-side GUI STiki.

While these initial results are encouraging, work remains to secure the infrastructure against the most determined of attackers. Moreover, the study discussed here has focused only on attack and defense for high-traffic, well-maintained wiki installations. Distinct from these environments are the many low-traffic or abandoned wikis being targeted by blackhat software (e.g., XRumer). Understanding the economics and operation of such spam campaigns may prove an important asset as the battle against Web 2.0 spam becomes increasingly complex.

For more, see our full papers:
(1) Link Spamming Wikipedia for Profit
(2) Autonomous Link Spam Detection in Purely Collaborative Environments

Andrew G. West, University of Pennsylvania
Avantika Agrawal, University of Pennsylvania
Phillip Baker, University of Pennsylvania
Jian Chang, University of Pennsylvania
Brittney Exline, University of Pennsylvania
Krishna Venkatasubramanian, University of Pennsylvania
Oleg Sokolsky, University of Pennsylvania
Insup Lee, University of Pennsylvania

About the author


Andrew G. West is a Research Scientist at Verisign Labs in Reston, VA. His Ph.D. work at the University of Pennsylvania focused on security and privacy issues in open collaboration applications.
