The Supreme Court is considering whether to adopt a broad reading of the Computer Fraud and Abuse Act that critics say could criminalize some types of independent security research and create legal uncertainty for many security researchers. Voatz, an online voting vendor whose software was used by West Virginia for overseas military voters in the 2018 election, argues that this wouldn’t be a problem.
“Necessary research and testing can be performed by authorized parties,” Voatz writes in an amicus brief to the Supreme Court. “Voatz’s own security experience provides a helpful illustration of the benefits of authorized security research, and also shows how unauthorized research and public dissemination of unvalidated or theoretical security vulnerabilities can actually cause harmful effects.”
As it happens, we covered a recent conflict between Voatz and an independent security researcher in last Thursday’s deep dive on online voting. And others involved in that altercation did not see it the way Voatz did.
The “unauthorized research” Voatz complains about includes a February research paper from MIT that exposed serious security problems with the Voatz app. The lead author of the MIT study, Michael Specter, presented his findings at the prestigious, peer-reviewed USENIX Security conference last month. And the Voatz Supreme Court brief doesn’t mention that the problems identified by the MIT team were largely confirmed by a computer security firm called Trail of Bits that Voatz itself hired to vet its software.
Voatz’s decision to prolong its spat with the MIT team by weighing in on an unrelated Supreme Court case is something of a head-scratcher. The MIT team only analyzed Voatz’s Android app, not its servers. So it’s hard to see how they could have possibly violated the CFAA.
In a statement to Ars, Voatz said it was “compelled” to file an amicus brief to the Supreme Court because it was mentioned in another amicus brief in the same case. Voatz said that this earlier brief, signed by Specter and a number of other security researchers, had some factual inaccuracies the company wanted to correct.
But Voatz also claimed that its own experience showed that independent security research was unhelpful and even counterproductive. Others argue the Voatz story shows just the opposite: that independent software analysis is essential to uncovering security problems that companies themselves might prefer to keep under wraps.
“I think if you look at the history of security research, the vast majority I guess is what they would consider unauthorized,” Specter told Ars in a Friday phone interview. “Independent security research is important, and if you put terms and conditions on what security researchers can look at and do, you’re inherently limiting their ability to provide solid research and advice to consumers and the market about these products.
“I vehemently do not agree” with Voatz’s take, Specter added.
The amicus briefs were filed in a Supreme Court case that has nothing to do with online voting and isn’t specifically about security research. The case arose after a Georgia police officer named Nathan Van Buren was caught taking a bribe to look up confidential information in a police database. The man paying the bribe had met a woman at a strip club and wanted to confirm that she was not an undercover cop before pursuing a sexual—and presumably commercial—relationship with her.
Unfortunately for Van Buren, the other man was working with the FBI, which arrested Van Buren and charged him with a violation of the CFAA. The CFAA prohibits gaining unauthorized access to a computer system—in other words, hacking—but also prohibits “exceeding authorized access” to obtain data. Prosecutors argued that Van Buren “exceeded authorized access” when he looked up information about the woman from the strip club.
But lawyers for Van Buren disputed that. They argued that his police login credentials authorized him to access any data in the database. Offering confidential information in exchange for a bribe may have been contrary to department policy and state law, they argued, but it didn’t “exceed authorized access” as far as the CFAA goes.
It’s a distinction that has had big implications in other cases. For example, back in 2010, federal prosecutors criminally charged a business called Wiseguy Tickets for automating the purchase and resale of tickets from Ticketmaster in violation of Ticketmaster’s terms of service. After an initial ruling went against the defendants, they accepted a plea agreement.
In 2011, federal prosecutors charged activist Aaron Swartz under the CFAA for mass downloading paywalled academic articles from the JSTOR database via the MIT network. Swartz was arguably authorized to access MIT’s network and the JSTOR articles, but the volume of downloads violated MIT and JSTOR’s policies. Swartz committed suicide before the courts ruled on his case.
More recently, LinkedIn sued a small analytics firm called hiQ under the CFAA for scraping data from LinkedIn’s website. Last year, the 9th Circuit Court of Appeals ruled for hiQ, holding that violating LinkedIn’s terms of service does not run afoul of the CFAA.
But the 9th Circuit’s reading of the law isn’t shared by all of the other appeals courts. If the case had been heard in some of the other circuits, hiQ could have lost the case. That’s why the Supreme Court took the case: to bring some uniformity to a law that has been interpreted differently in different parts of the country.
The threat to independent security research
Companies making digital devices and software often don’t appreciate having people expose flaws in their products. They frequently have terms of service that ban practices like website scraping and reverse engineering. If violating a product’s terms of service is a criminal act under the CFAA, then important computer security research techniques might become legally hazardous.
The researchers’ brief cites few instances where security researchers have actually been prosecuted, but it mentions a number of cases where they were investigated by law enforcement or threatened with lawsuits by private companies. Some researchers also say they have shied away from certain types of research due to potential legal complications.
“We vehemently disagree with their argument”
This is where Voatz came in. The researchers wrote that Voatz “reported a University of Michigan student to the FBI because the student conducted research into Voatz’s mobile voting app for an undergraduate election security course.”
In its own brief, Voatz disputes that, claiming that it merely reported suspicious activity to its customer—West Virginia’s secretary of state—and that the state notified the FBI. The student, whose name hasn’t become public, does not seem to have been charged with a crime. But authorities apparently did obtain a warrant to search the student’s phone in connection with the incident.
Voatz’s brief also cites the MIT study as an example of independent research gone wrong. Voatz claims that “once they had identified potential vulnerabilities, the MIT researchers demanded contact information for all of Voatz’s customers under threat of going immediately to the press.”
Specter says that’s flatly untrue. He says he and his colleagues never asked Voatz for customers’ contact information. Indeed, he says his team had no contact at all with Voatz in the months before releasing their paper. Instead, the researchers contacted the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA) with details about their findings. If anyone tried to contact Voatz’s customers, Specter said, it would have been CISA, not anyone at MIT.
And while Voatz portrays official bug-bounty testing programs as a superior alternative to unauthorized security research, Specter says that Voatz’s bug bounty program wasn’t a viable option at the time he began his research.
The version of the Voatz app offered for security testing was supposed to connect to a test server, theoretically allowing researchers to observe the app in action and test the security of its network communications. But when the MIT team downloaded the test version of the app, it didn’t work. Further research led Specter to believe that the servers hadn’t been running at all.
One of Voatz’s legal conditions also made Voatz’s official security testing program a non-starter for the MIT team. The program’s terms required researchers to give the company adequate time to patch a vulnerability before disclosing it publicly.
“That sounds fine,” Specter told Ars, “but later they say in the same document [Voatz gets] to define what ‘reasonable time’ means. Which effectively means they can take your research and say ‘we’re not going to patch that until this later date.'”
Few people in the computer security community seem to agree with Voatz’s point of view. At the time the MIT team did its research, Voatz’s bug bounty program was administered by a popular security testing platform called HackerOne. But HackerOne took the unusual step of severing ties with Voatz the same month the MIT research came out.
“After evaluating Voatz’s pattern of interactions with the research community, we decided to terminate the program on the HackerOne platform,” HackerOne said in a March statement to CyberScoop. CyberScoop said it was the first time HackerOne had done this in eight years.
“We vehemently disagree with their argument,” HackerOne CEO Mårten Mickos tweeted on Thursday. Casey Ellis, the founder of Bugcrowd, another bug bounty platform mentioned in Voatz’s brief, tweeted the same day that the company “does not agree with the tenets of the Voatz amicus brief.”