November 21, 2024
EP 66 – Post-Election Insights: AI, Misinformation and Security
In this episode of Trust Issues, host David Puner interviews James Imanian, Senior Director of the U.S. Federal Technology Office at CyberArk. They discuss the critical topic of election security, focusing on the recent 2024 U.S. presidential election. Drawing on his extensive background in cybersecurity, including a career in the Navy and a stint at the U.S. Department of Homeland Security, James brings a wealth of experience to the conversation, which explores AI’s impact on election security—highlighting how AI has transformed the landscape by increasing the scale, speed and sophistication of misinformation and disinformation campaigns. James explains the differences between misinformation, disinformation and malinformation, and their roles in the information environment surrounding elections.
He also highlights the importance of public-private partnerships in securing election infrastructure and the role of international collaboration in countering nation-state threats. The episode examines the challenges of maintaining trust in the digital age and the potential of identity verification technologies to enhance information trustworthiness.
Finally, the discussion touches on the parallels between election security and enterprise cybersecurity, emphasizing the need for critical thinking and proactive measures to uphold the integrity of both elections and organizational security.
For more insights from James Imanian on election security, check out his blog, “Six Key Measures for Upholding Election Security and Integrity.”
Today’s episode is a cyber post-mortem on the U.S. presidential election, set against the backdrop of many other 2024 elections worldwide. While many of us may be experiencing election fatigue, the insights from this discussion remain crucial. Our guest today is James Imanian, CyberArk Senior Director of the U.S. Federal Technology Office. James shares his impressive path from the Navy to CyberArk, which included a stint with the U.S. Department of Homeland Security. We discuss AI’s impact on election security, [00:01:00] focusing on how AI has transformed the threat landscape by increasing the scale, speed, and sophistication of misinformation, disinformation, and fraud.
James also highlights the importance of public-private partnerships in securing election infrastructure and the role of international collaboration in countering nation-state threats. We look at the challenges of maintaining trust in the digital age and explore the potential of identity verification technologies to enhance information trustworthiness. We also look at the future threat landscape for elections and discuss the ramifications for enterprise organizations, including the security lessons they can learn and the proactive actions they can take to protect themselves. Here’s my conversation with James Imanian.
David Puner [00:02:00] James Imanian, Senior Director of the U.S. Federal Technology Office at CyberArk. Welcome to the podcast.
James Imanian [00:02:15] Yeah. Thanks for having me, David. Looking forward to the conversation.
David Puner [00:02:20] Absolutely. Me too. And I guess before we get into the subject of today’s conversation, which is election security, of course, we just went through a big one here in the U.S. At the time of this recording, it was about a week ago. You’ve been with CyberArk since May of 2023, and this is your first time on the podcast. So, what does your role as Senior Director of the U.S. Federal Technology Office at CyberArk entail? And briefly, what’s been the career path that brought you here?
James Imanian [00:03:00] Yeah, I’ll talk about the career path first, which is what brought me here to a very exciting role at CyberArk. I’ve had a computer ever since grade school—a VIC-20 for those of us who may be old enough for that. Then I actually bought the original Macintosh when I was in high school, then went through the Naval Academy and went into naval aviation, but was always a computer guy. Eventually did get a computer science graduate degree while in the Navy.
In the Navy, I served in various roles—what we called information assurance, what we now call cybersecurity. So I did that with the Joint Staff. I did that with the Navy Staff. I was essentially a CIO for a carrier strike group. And then I was also at Fort Meade doing Fort Meade things in computer network operations. My last active duty assignment was actually with the Department of Homeland Security at the National Cybersecurity and Communications Integration Center, the NCCIC.
After I transitioned out, I helped the financial sector with cybersecurity and cyber resilience, helped the Air Force stand up their Chief Information Security Office, helped the State Department stand up their cybersecurity risk [00:04:00] program, and with Guidehouse—which spun out of PwC at the time—I was the first Chief Information Security Officer for Guidehouse. I left Guidehouse, went back into the government as the Deputy CIO for the F-35 Joint Strike Fighter Program, then left there, went to SAIC, and stood up their Cyber Threat Intelligence Integration Center. From there, I came over to CyberArk as the first U.S. Federal Technology Lead here.
What does that mean? It means I get to talk to the CISOs and the CIOs in the federal and public sectors. I also help our public sector team across state and local governments understand their needs. I help the account executives understand what the CIOs and CISOs are going through, what their challenges are. I listen to our customers and see how CyberArk can meet those needs [00:05:00]—either now or through future development—so we can continue being on the cutting edge of identity security.
David Puner [00:05:15] Thank you for taking us through that, James. One of the things that’s so cool about working in this space is working with people who come from everywhere and anywhere. It’s really, really exciting and interesting in that way. One quick question about all that you just went through: Do you still have that first Macintosh?
James Imanian [00:05:35] No, and it’s worth a lot now. I’ve got the instruction manual in my bookcase, but I think I lost the Macintosh somewhere in the moves during my Navy transitions.
David Puner [00:05:45] Should have held on to it.
James Imanian [00:05:47] Absolutely.
David Puner [00:05:50] To get into the heart of this conversation—and we want to do a bit of a disclaimer here in that we know that everybody has election fatigue—this topic wasn’t only relevant a week ago, a month ago, or two years ago. It’s super relevant now in [00:06:00] that there are going to be more elections, and a lot of this stuff is relevant to organizations in ways that we will get to in this conversation. In a blog that you wrote before the 2024 U.S. presidential election, you mentioned that it would be the first true AI election. You highlighted the challenges posed by AI in fueling misinformation, disinformation, and malinformation campaigns, and we’ll get into the difference between those terms in a little bit. From your perspective, did the election live up to that billing? What did we see in this election with disinformation and influence operations? And do we expect to learn more as time goes on?
James Imanian [00:07:00] The short answer is we did see what we were expecting to see—what I talked about in the article and what many others had mentioned in their perspectives, both from the federal government and the private sector—and we will continue to learn more. And while I agree that many of us may have some election fatigue, I think a lot of the issues that we’re going to be talking about today are types of initiatives and programs that we need to start today, or enhance today, in order to prepare for the midterms, prepare for four years from now, the next U.S. presidential election, and other elections around the world, which we’ll talk about as well.
David Puner [00:08:00] How has AI changed the election security landscape, just to start things off?
James Imanian [00:08:10] Like any tool—especially a technology tool—AI has helped with scale, speed, and reducing the technical sophistication needed by an adversary to achieve effects. What do I mean by that? Humans have been trying to influence other humans’ decisions for millennia. Technology, from the printing press to social media in the 2000s, has advanced that capability. With technology, we’ve had the ability for one individual or a small group of individuals to influence a lot of people—that’s the scale. The speed comes from this 24-hour news cycle that we’ve been in for many decades. AI has been able to create the content to feed that 24-hour news cycle, which influences people faster.
The technical sophistication—what we’ve seen over the past two years, especially with generative AI like ChatGPT—means you no longer need a computer science degree to create tools, platforms, and campaigns. AI lets you do that again, at a speed and scale that we haven’t seen before.
David Puner [00:09:30] How convincing is AI-generated content? We’re talking about anything from text-based posts to images and videos. How convincing is it, and how much more sophisticated has it gotten in the last couple of years?
James Imanian [00:09:45] Especially around the text, it’s very convincing. In computer science, there’s this concept called the Turing test.
David Puner [00:09:50] Okay, I don’t think I’ve heard that term before.
James Imanian [00:09:55] Alan Turing—you may know him from the movie The Imitation Game—was a British computer scientist who helped break the Enigma code. He’s one of the great computer scientists of all time. He was thinking about AI in the 1950s, before we even had the term AI. He proposed the Turing test, which asks: when can a computer be mistaken for a human? You put a computer behind one terminal and a human behind another, and they text back and forth. If you can’t tell whether you’re talking to the computer or the human, the computer passes the test.
AI passes the Turing test today. There’s almost no distinction: you could put almost anyone in a room, give them a tweet generated by AI and another one written by a person, and they wouldn’t be able to tell the difference. Now, with generative AI, we can also generate images, audio, and video. Images and videos are still easier to detect as fakes, but they’re getting better.
AI is part of what we call exponential technologies—we’re just at the beginning of a curve that’s about to take off. Despite election fatigue, today we need to start preparing for what AI will be capable of in two or four years—like creating videos that are indistinguishable from those produced by legitimate news sources. What happens when a reputable news source has to clarify that it wasn’t their crew that shot a certain video? That’s what we’re going to be dealing with.
David Puner [00:11:30] The possibilities for where this could go seem somewhat endless and multi-dimensional—maybe even a dimension we can’t fathom yet. It may go without saying, but the security of election systems—from voter registration databases to voting machines—is crucial. What are the most significant vulnerabilities in election infrastructure? What do we know about how they played a role in the 2024 U.S. presidential election, and how can they be addressed to ensure the integrity of future elections?
James Imanian [00:12:10] Here’s a good news story: back in 2017, the election system was declared critical infrastructure in the United States, and I think other countries have since taken similar measures. I want to separate this into two layers. There’s the technology infrastructure layer of election systems—the voting machines, the tabulations, and so on. Then there are the voters themselves, which I’ll call the information environment.
The infrastructure layer—from 2017 to now—has seen a lot of public-private partnerships. The U.S. federal government, voting machine vendors, and local counties using or upgrading these voting machines have been working together. While vulnerabilities do exist, they are well understood, and there are mitigations in place. For instance, many counties use paper ballots as backups, so if tabulations go wrong, there is always a paper trail. These machines don’t touch the internet, but they still need software updates, which must be done carefully. Procedures must ensure that only validated USB sticks are used.
On the other hand, there’s the information environment, which is broader than the election infrastructure itself. Here, misinformation, disinformation, and malinformation come into play.
David Puner [00:14:00] Misinformation, disinformation, and malinformation all play a role in this information environment.
James Imanian [00:14:10] Exactly. In the Department of Defense, we used to call it the information environment. It encompasses technology, adversaries, allies—essentially everything surrounding an operation. So, misinformation, disinformation, and malinformation—for those not deeply familiar—are different. Misinformation is when I tell you something that I remember incorrectly without intending harm. Disinformation is when I intentionally concoct a story or twist the truth to mislead. Malinformation takes a bit of truth but presents it in a way that’s beneficial to the speaker. All of these are present in the information environment.
David Puner [00:15:00] In your pre-election blog, which was written for the CyberArk blog—quick shoutout there—you also discuss intensifying efforts by nation-state actors to undermine confidence in democratic institutions. What are some strategies to counter these threats, and how can international collaboration play a role?
James Imanian [00:15:30] The first steps have already been taken. All democratic countries need to view election security as critical infrastructure, or use whatever term they prefer. The basics of cybersecurity come into play—cyber resilience, knowing who you are, what you’re trying to accomplish, your adversary’s intent and capabilities, and how you’re going to fight through challenges.
When it comes to election security, partnerships are crucial—whether it’s public-private partnerships or partnerships within a society. We need education and “prebunking” efforts to prepare people. Since 2017, influence operations have become part of our public discourse. The U.S., specifically CISA and the FBI, has called out Russian and Iranian influence operations, especially around the 2024 elections. Even on election day, Russian actions were called out. Globally, half of the world’s population was eligible to vote this year, and I have no doubt U.S. officials were in contact with counterparts in India, France, and England to share insights on what adversaries were doing.
David Puner [00:17:00] More than 50 countries held elections this year. Are other democracies facing cyber threats similar to those in the U.S.? How do they differ, and what can we learn from other countries’ experiences?
James Imanian [00:17:30] Yes, other democracies are facing similar threats, and we’ve been learning from each other. We have traditional relationships—for example, with England, France, and the European Union—and as these actors (often Russia or Iran) launch influence operations, we share intelligence and lessons learned. We’ve also seen joint actions to take down “troll farms,” which are used to influence elections. AI’s capabilities are allowing adversaries to execute these operations more effectively.
David Puner [00:18:30] Trust plays a big role in all of this, and you’ve talked about the weaponization of trust. How does identity and trust factor into election security? How can identity verification technologies like digital wallets enhance trustworthiness and reduce the impact of misinformation?
James Imanian [00:19:00] This is a trust issue, right? Let me tie together a few of the themes we’ve discussed. First, if you’re consuming information, you need to know where it’s coming from. How do you verify that the news source you’re reading is reputable? Websites can use signed certificates to prove their authenticity, but social media platforms don’t have similar safeguards. We need better ways to verify identities on these platforms.
Second, individuals need to practice critical thinking—consider whether a source is reputable or consistent with their previous positions. Personally, I use a news service that aggregates multiple sources and helps me see where each story is coming from—whether it’s a center, far-right, or far-left perspective. Technologies like digital wallets can help people prove their identity when voting or posting information, and we’ll need to discuss how these tools are implemented.
David Puner [00:20:30] You mention critical thinking. In an age where “facts” seem to come and go in milliseconds, and people don’t necessarily have the bandwidth to dig deeper, how can critical thinking be promoted?
James Imanian [00:21:00] That’s a tough philosophical question, but one answer might be civics. We need to treat each other civilly and engage in respectful discussions. When it comes to social media, we need to verify information before reposting it or sharing our opinion. One of the challenges is the rapid reposting capability, which often spreads misinformation without verification. Platforms should aim to prevent misinformation from going viral.
David Puner [00:22:00] What are some proactive cybersecurity measures for upholding election security and integrity, beyond critical thinking? Which measures are most critical for future elections?
James Imanian [00:22:30] In cybersecurity, we talk about CIA—confidentiality, integrity, and availability. For election security, integrity and availability are key. Integrity involves ensuring that the code on voting machines is what it’s supposed to be. Governments also need to be transparent about their policies and actions to support public trust. Availability means that voting systems must be accessible—people need to be able to vote without interference. During the U.S. elections, we saw false fire alarms that disrupted voting, but measures were taken to extend voting hours and ensure availability.
David Puner [00:23:45] How can the lessons from securing elections be applied to enterprise organizational cybersecurity and identity security? What parallels can be drawn between election security and enterprise security practices?
James Imanian [00:24:00] There are many parallels. In both cases, you need to understand your mission, your applications, and who’s accessing those applications. You also need to understand the identities of those accessing systems, whether they’re humans or machines. Foundational cybersecurity means understanding your mission, understanding the threat against it, and having a plan to detect, respond, and recover from attacks.
David Puner [00:25:30] Fast forward to the U.S. midterm elections two years from now, and then the next presidential election in 2028. What emerging threats might we be grappling with then? More advances in AI, or something else?
James Imanian [00:26:00] Two things stand out. First, AI’s ability to generate videos and images that are indistinguishable from real ones will be a challenge. We’ll need to find ways to verify whether content is real or not. The second issue is the ability for AI agents to form individual relationships with people. Imagine an AI or an adversary targeting 10,000 people in Wisconsin and building relationships with them over six months—without them knowing it’s an AI. This kind of targeted manipulation is what I’m concerned about.
David Puner [00:27:30] Are there teams working on defense against this right now?
James Imanian [00:27:40] I’ve heard concerns about this from different organizations. Recently, there was a case involving an AI where a minor developed a relationship with it, and that led to tragic consequences. So, this isn’t theoretical—it’s real, and it’s something we need to address as democratic societies.
David Puner [00:28:30] What is “pig butchering,” and what does it have to do with election fraud?
James Imanian [00:28:45] Pig butchering is a term used in the cybersecurity space for a type of fraud where an adversary “grooms” a victim—often over social platforms—to siphon money. The term comes from “fattening up” the pig before butchering it. AI’s capabilities allow adversaries to collect information about individuals and target them more effectively, making these operations faster and more scalable.
David Puner [00:30:00] What should we be questioning when it comes to facts or even an email that we receive? Is it just about the basics of cybersecurity?
James Imanian [00:30:20] We live in interesting times. The ways we establish relationships and trust haven’t really changed over thousands of years. But technology has allowed us to interact with people we don’t really know. We need to get back to establishing trust in more traditional ways—meeting in person or having face-to-face conversations—because establishing trust online is challenging.
David Puner [00:31:30] We often conduct interviews like this over video platforms, which is convenient. Is it going to get to the point where we’ll need to do all interviews in person to avoid fabricated personas?
James Imanian [00:31:50] No, because we do have technologies that can verify identities. For example, if you’re going to make an investment decision, you’d want me to use a smart card or something cryptographically bound to my identity. In the future, video platforms might have these verification methods. For something casual—like a pickleball discussion—maybe a lower level of assurance is enough.
David Puner [00:33:00] James, thanks so much for joining the podcast. Really appreciate it—this is fascinating stuff. I hope we’ll be talking to you again soon. Let’s get out there and do some critical thinking.
James Imanian [00:33:15] Thanks for having me, David. I hope this starts some critical conversations, because we need to have these discussions in our small trust circles and build them out.
David Puner [00:33:30] Thanks for listening to Trust Issues. If you liked this episode, please check out our back catalog for more conversations with cyber defenders and protectors. Don’t miss new episodes—make sure you’re following us wherever you get your podcasts. And drop us a line if you feel so inclined—questions, comments, suggestions (which are kind of like comments). Our email address is trustissues, all one word, at cyberark.com. See you next time.