
For the Love of Bug Bounty

Chris Roberts, suddenly the nation’s most notorious computer security researcher, has had an exciting couple of months, ping-ponging between industry martyr and villain.

In April, FBI agents yanked him off a Boeing 737 as soon as it landed in Syracuse; apparently they weren’t thrilled when Roberts tweeted mid-air, with a smiley face, that he might hack into the plane’s control systems and unleash all the oxygen masks. After hours of questioning, the Feds let him go—but kept his fleet of fancy electronics, including two laptops.

Chris Roberts, a self-described “ethical hacker,” was yanked off a Boeing in Syracuse after he tweeted about hacking it. / Twitter

Some in the security community scolded Roberts for joking so cavalierly about his ability to hack planes. Others, though, were troubled by the FBI’s show of power. Roberts, after all, is a self-described “ethical hacker”—i.e., a hacker who sniffs out security holes not to exploit them nefariously, but to recommend they be fixed. He’s been on the information security circuit for years now, promoting his consulting company, One World Labs, and sounding the alarm about the vulnerabilities of commercial aircraft.

In May, as new details emerged, support for Roberts began to dry up. According to an affidavit obtained by Canadian news outlet APTN, his mid-air exploits were hardly a joke. As it turned out, Roberts had once taken over the controls of a passenger-filled commercial jet by connecting his laptop to its inflight entertainment system, or so he’d told the FBI in an earlier interview. Soon enough, headlines were declaring that Roberts had cyber-commandeered a number of planes in the past and had even made one turn “sideways” while in flight, prompting visions of Blue Angels–style barrel rolls.

Roberts’s capabilities—Is he really the Lisbeth Salander of jet hacking?—remain in dispute. (Wired’s Kim Zetter, at least, thinks not.) But his antics have drawn attention to the current best practices of his industry, along with the place of vulnerability research in a world that is increasingly surveilled and securitized.

Manufacturers like Boeing have long admitted that aircraft systems might be vulnerable to hacking, although they have mostly done so in discreet regulatory filings. Their response to Roberts and his fellow alarm-sounders has been, essentially, Thanks, but we have the situation under control.

The infosec community, though, is not so easily deterred.

Publicizing vulnerabilities is considered a standard part of improving security for all—it creates public pressure on companies and governments to respond. And while Boeing is not on board, many other corporations nurture this ecosystem through grants to researchers or through bug bounty programs, which provide cash rewards to those who disclose vulnerabilities.

Of course, this ersatz cash system can get messy fast. Some security researchers feel that companies under-reward them, while others discover vulnerabilities only to sell them to the highest bidder.

The sale of so-called zero-day exploits is part of a larger, thriving grey market. Companies like Vupen, FinFisher, and Hacking Team, many of them based in Europe but with clients around the world, deal in penetration software, spyware, and malware, and have become exemplars of how the security researcher’s toolkit can be put to diabolical use. While some claim to sell only to western countries’ intelligence and police services, and to their allies, it’s not uncommon to see reports of these companies’ software being used against dissidents, journalists, and activists in repressive countries like Ethiopia or Bahrain.

For years, the infosec community has been torn between two factions: those who say that these companies are doing legitimate business and others who claim that they distort the security researcher’s mission. What to do about them—whether software code can be meaningfully regulated, especially when these companies may do business with U.S. intel agencies—has been a problem of a higher order.

That’s where the Bureau of Industry and Security, a little-known division of the Commerce Department, comes in. Last week, the BIS released a new set of proposed rules for implementing the Wassenaar Arrangement, an international export agreement observed since 1996 by dozens of countries, including the United States. Right now, the Wassenaar Arrangement governs the export of things like tanks and missiles. The BIS wants to add certain digital technologies to the list, including zero-day exploits and what it calls “intrusion software”—e.g., penetration-testing software—a change that may have the unintended effect of imperiling the work of bug bounty hunters.

These new rules are widely believed to be a response to the activities of companies like Vupen, whose founder has denounced the BIS’s move. (The Commerce Department has published the rules online and invited sixty days of public comment.) They are also one more step toward classifying certain software products as “dual-use,” which in Wassenaar parlance means that they are considered to be potential weapons.

Many researchers say they support regulating zero-day exploits but worry that the new rules will harm legitimate research, and that the heavy hitters of the infosec field—like Vupen—will simply move their operations further beyond U.S. jurisdiction or continue quietly doing business with western intel agencies and shady foreign regimes.

These critics aren’t entirely wrong, but they miss the larger question, which is not just whether we should regulate potentially harmful software, but whom we should trust to safeguard and improve our balky infrastructure, and how this can be done without becoming more deeply enmeshed in the logic of securitization that, since 9/11, has colonized so much public thinking. Potentially deadly plane hacks like Roberts’s seem to be mostly theoretical, yet the fear of such a catastrophic event has allowed shadowy contractors and intelligence agencies to acquire a great deal of power. Both groups thrive on insecurity, whether real or perceived.

Cyberwar is now the watchword, and meal ticket, of hawks everywhere. But rather than pay to upgrade Amtrak infrastructure—which is failing and killing people, no thanks to any terrorists with laptops—U.S. legislators prefer to fund a bombing campaign against the Islamic State, which poses almost no threat to the United States but plenty to its interests (which happen to be everywhere).

Meanwhile, the NSA and its partners, particularly Britain’s GCHQ, hack into computers and networks around the world. They compromise the privacy and security of millions of people, and they do so with the (immensely profitable) collaboration of traditional defense contractors, such as Boeing and Raytheon, big telecoms, and, yes, rogue security researchers. It’s hardly a coincidence that the BIS proposal contains looser restrictions for selling intrusion software to other Five Eyes countries. Even as the Commerce Department moves forward with rules intended to crack down on foreign companies, it leaves exceptions that favor large U.S. companies and America’s closest intelligence partners.

This kind of threat environment, as a security researcher might call it, is more complicated than good hackers versus bad ones. Chris Roberts’s shenanigans make him look like a narcissist at best, but as ZDNet’s Violet Blue notes, they’ve also provoked some worthwhile disagreement about what security means and how best to achieve it.

Perhaps Roberts’s colleagues could learn something from the debate over government surveillance. While the infosec community is freaking out over a set of vague, but likely mild, regulatory proposals, the U.S. Senate failed to pass any surveillance reform, ensuring that Section 215 of the Patriot Act will expire. That may seem like an unearned victory—the Senate is nothing if not procedurally incompetent—but it came about because Edward Snowden’s disclosures provoked public debate, which in turn empowered a ragtag coalition of digital privacy professionals, civil libertarians, anti-war activists, a few tech executives, and Rand Paul’s paranoid libertarian constituency. If security researchers want to be seen as friends of the public, they’ll have to make similar alliances. They could also start by telling us what comes first—security or profit?