Threat researchers on the cutting edge of cybersecurity have a certain kind of drive — almost a relentless need — to get into the attacker’s mind, solve the “unsolvable” challenge and expose emerging attack techniques. So, it’s not every day these elite researchers come together to share the secrets to their success, and it’s even less common to hear about their failures. But that’s exactly what happened at the inaugural INTENT Security Research Summit last month.
In one of the virtual event’s most popular sessions, moderated by Erez Yalon, head of research at Checkmarx, security researchers from Intezer Labs, Claroty, CyberArk Labs and Palo Alto Networks got candid about times when things didn’t go as planned, what they learned from these “spectacular screwups” and how their stories can benefit the global research community.
Never assume
One skipped step in a protocol stack vulnerability research project taught Sharon Brizinov, principal vulnerability researcher at Claroty, never to assume what hasn’t been proven. He shared how a single erroneous assumption led to a derailed project, pushback from a vendor who couldn’t reproduce the exploit he developed, and a close examination of existing research processes and procedures to make sure similar mistakes didn’t happen again.
Eran Shimony, senior vulnerability researcher at CyberArk Labs, recounted a time when he uncovered a supposed bug as part of his ongoing research on local privilege escalation vulnerabilities. After a week of reverse engineering and developing the exploit, he responsibly disclosed the flaw to the vendor. It was only after the vendor came back a few weeks later, saying they were having difficulty running the exploit, that he realized he had inadvertently designed the exploit to run with admin privileges. Since an exploit that already requires admin rights doesn’t actually escalate anything, those “very cool bugs in the kernel” turned out to be much less of a security issue.
Normalize failures
“Even though you never read about researchers’ failures on Reddit, Twitter and in other research publications, they are there,” said Brizinov. And not sharing these failures can create negative ripple effects such as survivorship bias, in which research gets distorted because less-than-stellar results are overlooked or omitted altogether, he noted.
Sure, it can be humbling to admit your mistakes, but research is all about trial and error. “We all fail on a daily basis; it’s part of the job,” said Ari Eitan, VP of research at Intezer Labs. The more comfortable security researchers become with sharing their missteps, the less those missteps will be viewed as “mistakes” at all.
“The more we see others going through similar struggles — and experiencing similar failures — the more encouraged we’ll be to keep working hard to achieve our goals,” echoed Shimony.
Beaten to the punch
Eitan recalled a time his team believed they had discovered a new ransomware sample. They dug deeper and ultimately decided to publish their research. After all, “there’s no benefit to having intel and keeping it to yourself,” he said. “But we took our time.” Little did they know another research team was working on the same thing. “Just two hours before our research launched, another vendor published findings on the exact same ransomware sample, just with a different name.”
Irena Damsky, director of research at Palo Alto Networks, shared a similar story from years ago at a previous research job. After identifying a threat vector, her team reached out to several other vendors to combine threat data and develop visualizations of the threat around the world. Each vendor planned to publish their research in tandem to maximize attention, but thanks to a time zone mix-up, her team’s research didn’t get published until later in the day. “Things happen,” she noted, but especially when it comes to major research, “automate processes whenever you can” to help make sure nothing gets left to chance.
Let it go
It’s easy to learn from our own failures, but what happens when someone else is to blame?
Shimony shared a story in which he had disclosed a vulnerability to a vendor, and the vendor had privately confirmed that the flaw would be published as a CVE. Instead, the bug was patched but never acknowledged — and the vendor stopped responding to his follow-up emails. “There are instances when vendors don’t play by the rules, or even pull the ‘legal card,’ when they shouldn’t,” he noted.
The panel agreed that the best things to do in such circumstances are to keep following responsible disclosure protocols, not get overly discouraged when others don’t play “fair,” write up research clearly enough that non-technical audiences can understand it, and put all content through rigorous reviews by marketing, legal and product teams before publication.
The panel also stressed the importance of cultivating environments in which employees and team members are comfortable admitting to their mistakes — and empowered to learn from them without fear of criticism or consequence.
Learn from the process
Thomas Edison once said, “I have not failed. I’ve just found 10,000 ways that won’t work.”
Failure can lead to eventual success and discovery, but it’s human nature to want to keep those setbacks to yourself. The panel’s ultimate message was to push past that instinct, because there’s much to be learned from the imperfect process itself. Damsky credited her high school math teacher with instilling this important lesson. “The final answer to the question is not the most important thing — it’s also about how you get there and what you learn along the way,” she said.