2010 May 07 at 13:30
Rachel Greenstadt, Drexel University
Part of the MITACS Speaker Series on Privacy; see http://crysp.uwaterloo.ca/mitacs/Speakers.html
The use of statistical AI techniques in authorship recognition (or stylometry) has contributed to literary and historical breakthroughs. These successes have led to the use of such techniques in criminal investigations and prosecutions. Stylometry, however, can also be used to infringe on the privacy of individuals who wish to publish documents anonymously. Our research demonstrates how various types of adversarial attacks can reduce the effectiveness of stylometric techniques to the level of random guessing or worse. Few have studied such attacks or their devastating effect on the robustness of existing classification methods. Our work in this area has shown that non-expert human subjects can defeat three representative stylometric methods simply by consciously hiding their writing style or imitating the style of another author. This talk will examine the ways in which authorship recognition can be used to thwart privacy and anonymity, and how adversarial attacks on stylometry can be used to mitigate this threat. It will also cover our current progress in building a large corpus of writing samples and attack data, and in creating a tool to help authors preserve their privacy when publishing anonymously.
Bio: Rachel Greenstadt joined the faculty of Drexel University as an Assistant Professor of Computer Science in September 2008. She runs the Privacy, Security, and Automation Laboratory (http://psal.cs.drexel.edu) at Drexel, researching topics at the intersection of artificial intelligence, privacy and security, and human-computer interaction. Prior to this appointment, she was a Postdoctoral Fellow at Harvard's Center for Research on Computation and Society. She was co-chair of the 2008 Workshop on Distributed Constraint Reasoning and will chair the 3rd Workshop on Artificial Intelligence and Security at CCS in 2010.