How we can evade, protest, and sabotage today's pervasive digital surveillance by deploying more data, not less—and why we should.

With Obfuscation, Finn Brunton and Helen Nissenbaum mean to start a revolution. They are calling us not to the barricades but to our computers, offering us ways to fight today's pervasive digital surveillance—the collection of our data by governments, corporations, advertisers, and hackers. To the toolkit of privacy-protecting techniques and projects, they propose adding obfuscation: the deliberate use of ambiguous, confusing, or misleading information to interfere with surveillance and data collection.

Brunton and Nissenbaum provide tools and a rationale for evasion, noncompliance, refusal, even sabotage—especially for average users, those of us not in a position to opt out or exert control over data about ourselves. Obfuscation will teach users to push back, software developers to keep their users' data safe, and policy makers to gather data without misusing it.

Brunton and Nissenbaum present a guide to the forms and formats that obfuscation has taken and explain how to craft its implementation to suit the goal and the adversary. They describe a series of historical and contemporary examples, including radar chaff deployed by World War II pilots, Twitter bots that hobbled the social media strategy of popular protest movements, and software that can camouflage users' search queries and stymie online advertising. They go on to consider obfuscation in more general terms, discussing why it is necessary, whether it is justified, how it works, and how it can be integrated with other privacy practices and technologies.
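To make the query-camouflage idea concrete: such software interleaves a user's genuine searches with plausible decoy queries, so that any profile built from the logged query stream is diluted with noise. The Python sketch below is only an illustration of that pattern, not the software the book describes; the search endpoint (example.com), the decoy vocabulary, and the timing parameters are all placeholder assumptions.

```python
import random
import time
import urllib.parse
import urllib.request

# Hypothetical decoy vocabulary. A real tool would draw terms from live
# sources (news feeds, popular-query lists) so decoys resemble real traffic.
DECOY_TERMS = [
    "weather forecast", "banana bread recipe", "used bicycles for sale",
    "local museum hours", "python list comprehension", "garden soil ph",
]

# Placeholder endpoint; an actual plugin would target the search engines
# the user already queries.
SEARCH_URL = "https://example.com/search"


def send_decoy_query(term: str) -> None:
    """Issue one decoy search request, adding noise to the query stream
    an observer could log for this client."""
    url = SEARCH_URL + "?" + urllib.parse.urlencode({"q": term})
    try:
        urllib.request.urlopen(url, timeout=5).read()
    except OSError:
        pass  # decoys are best-effort; a failed request is simply dropped


def run(n_decoys: int = 20, mean_gap_seconds: float = 300.0) -> None:
    """Emit n_decoys decoy queries at randomized intervals so the noise
    does not follow an obviously mechanical schedule."""
    for _ in range(n_decoys):
        send_decoy_query(random.choice(DECOY_TERMS))
        time.sleep(random.expovariate(1.0 / mean_gap_seconds))


if __name__ == "__main__":
    run()
```

The randomized spacing is not incidental: decoys emitted on a fixed timer are easy for an adversary to filter out, which is exactly the kind of consideration the book raises when it urges crafting obfuscation to suit the goal and the adversary.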
Notes on contributors

Finn Brunton (Ph.D., University of Aberdeen) is an assistant professor in the Department of Media, Culture, and Communication at NYU. His research interests include digital media, particularly hacking and experiments in networked politics; cryptocurrencies and money; and the history of computing.

Kevin Driscoll (Ph.D., University of Southern California) is a postdoctoral researcher at Microsoft Research. His research concerns the popular and political cultures of networked personal computing, with special attention to myths about Internet infrastructure.

Tarleton Gillespie (Ph.D., University of California at San Diego) is an associate professor in the Department of Communication and the Department of Information Science at Cornell University, with an affiliation in the Department of Science and Technology Studies. His current work considers the regulation of online media platforms and its implications for free speech and public discourse. He is also the co-founder of the blog Culture Digitally.
Abstract

This chapter takes up moral and ethical arguments for and against obfuscation. Is obfuscation a form of lying, a waste of resources or a kind of pollution, an act of free riding on the willingness of others to disclose data about themselves, or even an attack on the data collection system itself? The authors discuss each of these objections in turn, urging readers to look beyond these immediate challenges toward deeper issues of ethical means and ends and of "informational justice" (drawing in particular on the work of Rawls). The chapter also considers the moral obligations individuals owe, or do not owe, to data collectors, and vice versa. Finally, the authors turn to how risk is distributed in the collection of data, and discuss obfuscation as a strategy for redistributing that risk more equitably.