August 27, 2024

EP 60 – Going Viral: Security Insights from TikTok’s Former Global CSO

In this episode of the Trust Issues podcast, Roland Cloutier, who served as TikTok’s Global Chief Security Officer (CSO) from April 2020 to September 2022, joins host David Puner for a discussion that covers his extensive experience in the field of security. He previously held similar roles at ADP and EMC and is now a partner at the Business Protection Group.

Roland discusses the challenges he faced in protecting sensitive data at TikTok, the social media platform with over 1 billion active users. He also talks about the complexities of ensuring data security and compliance. Roland emphasizes the importance of identity in modern security, explaining how privilege controls across the IT estate are crucial for protecting workforce users, third-party vendors, endpoints and machine identities.

Roland also highlights the need for a deep understanding of the business and its culture to implement security measures effectively. He shares insights into the role of identity in determining access to data and the importance of continuous controls assurance and validation. 

The episode provides a fascinating look into the security imperatives of a major social media platform and the measures taken to protect user data. Listeners will gain valuable insights into the strategies and principles Roland employed during his tenure at TikTok, as well as his broader views on security and privacy in the digital age.

David Puner: [00:00:00] You’re listening to the Trust Issues Podcast. I’m David Puner, a Senior Editorial Manager at CyberArk, the global leader in identity security.

Hello and welcome to Trust Issues. Today’s guest, Roland Cloutier, was TikTok’s Global Chief Security Officer from April 2020 to September 2022. Before TikTok, he was CSO at ADP for a decade and at EMC for several years. Now he’s a partner at the Business Protection Group. Whether you’re a fan of TikTok or not, it’s undeniable that the platform is an enormously popular social media giant with over 1 billion active users.

However, it’s faced significant controversy in several countries, including the U.S., India, and various European nations, primarily due to security and privacy concerns. In my conversation with Roland, who also has a background in law enforcement and the military, he discusses the challenges of protecting sensitive data, including where it lives, how it’s used, and compliance issues.

We also explore the role of identity in determining who has access to this data. Roland shares insights into the complexities and challenges he faced during his tenure at TikTok, emphasizing the importance of privilege controls across the IT estate to protect workforce users, third-party vendors, endpoints, and more.

While we touch upon the scrutiny TikTok has faced, our focus remains on the security strategies and principles Roland employed during his time there. This conversation provides a fascinating look into the security imperatives of a major social media platform and the measures taken to protect user data.

So, however you may feel about TikTok, it’s interesting to hear from someone who was at the forefront of its global security efforts. Here’s my conversation with Roland Cloutier.

David Puner: [00:03:00] Roland Cloutier, former Global Chief Security Officer at TikTok and ByteDance, ADP, EMC, and current partner and principal at the Business Protection Group. Welcome to Trust Issues.

Roland: [00:03:05] Thanks, David, great to be here.

David Puner: [00:03:07] Really excited to have you, and I should say that of all the guests we’ve had on this podcast thus far over the last couple of years, you are the one that my kids are most excited for, so thank you for that.

Roland: [00:03:18] I’m glad I can help entertain.

David Puner: [00:03:20] Absolutely. So let’s just dive right in. You served as TikTok’s Global Chief Security Officer for about two and a half years before your departure in September of 2022. Before that, you held the same role at ADP, or at least with the same title, for 10 years. And before that, at EMC. Your career, though, began in the military and then segued into law enforcement. How did your background in law enforcement and your military service prepare you for a career in security? And how do you continue to lean into that background today?

Roland: [00:04:00] Yeah, it’s crazy, right? Like, you don’t see a lot of chief security officers coming up, especially on the more technical side, from law enforcement or the military. But I’ll tell you what, I bucketed it into three categories that I think are important to me and important to how I lead and how I manage organizations that I’m fortunate enough to be in.

I think the first one is as a professional in security or enforcement. It gave me the basics of protecting things. I mean, think about it. You come out of high school, and you’re thrown into the military, and you’re in a very dedicated specialty around protecting people and things and national security. And they train you, right? So, you know, you get this foundation at a young age of how to protect things. And that really becomes a part of who you are and your persona and your professional life.

Roland: [00:05:00] I think the second thing that’s important, at least it was for me, is around self-discipline. The organizations that I belonged to, and the things that I did, and the things you had to put up with, and the sensitivity and criticality of what we were doing, and making sure, not that you just got up and went to work every day, but that you were accountable for yourself and the people that are around you. That self-discipline to go that extra step, to do the hard work, to ensure that you’re doing 100 percent of what you’re required to do every day is really important to me. And I’ve taken that throughout my career and still do today.

And I think the last area is about being a team sport. You know, when you start in the military, I was combat security police, anti-terror specialist, focused on nuclear security and air security. But I didn’t fly the planes. I didn’t load the aircraft. You know, I didn’t put the people on the aircraft. There were organizations and people and teams that made sure that the U.S. military and our allies got things done around the world.

Roland: [00:06:00] And I was a piece of that to ensure mission success and resilience. And so everything we do is a team sport in security risk and privacy enforcement and commercial today, because we’re not the business. We’re the people helping them get to market and do it securely and in a trusted way. So I think that team sport and understanding my job, my responsibility as part of that, was also instilled in me young, and I still carry that today.

David Puner: [00:06:33] Interesting. So then how did your previous chief security officer experiences at ADP and EMC help shape your approach to security at TikTok?

Roland: [00:06:42] Yeah, I think each job builds on each other. I would say try not to— I mean, you learn so much in each job. I think the three things that stick out to me best are understanding the business and customer first. Many of us are action-oriented. We love what we do. We know there are bad guys. We’ve got to go fix stuff. We want to see how people are breaking in. But I take a step back and I say, what is the business I am in, and who is the customer? Like when you think of TikTok, I’m sure you think of the customer being the end user, right? Watching the videos and ordering stuff off TikTok shop, like that’s a part of it.

Roland: [00:07:30] But don’t forget the entire other side of the business and advertising and marketing and the people that want to hire content creators to be their face. Like there’s a lot to a business like that. And how do you protect it if you don’t know the business and you don’t know who the customers are? So I like to spend time learning from, you know, like our people in global business services or in customer delivery or in media and marketing and understanding their perspectives and how my job, how they feel my job helps them get to market and be successful. So that’s number one.

Roland: [00:08:00] Number two is you've got to know the culture. It's not always an easy thing to do, especially when you're in a multinational, and so that takes time. But start at least with the view that you're never going to know the entirety of the culture until you dive deep, accept your role as a global leader, and begin to sample the culture, understand it, and get feedback. So knowing the culture is truly important.

And I think when you are in any company where you have a technical role, engineering alignment is absolutely critical. And I mean absolutely critical.

David Puner: [00:08:42] And so what do you mean by that?

Roland: [00:08:44] I think where a lot of organizations get in trouble is that they don't have a mechanism for deep ingrainment into the development or engineering organization that is the product, that is the business. And, you know, many of them say, well, segregation of duties. We have to have oversight. We support the SDLC or the secure-by-design program. They have to do the work. We have to educate them, give them the tools. A lot of people leave it at that level of segmentation.

But the problem is that if we’re not deeply aligned, we don’t align our tools and our technologies. We create these invisible boundaries that often reduce the effectiveness of servicing them as a customer, of creating capabilities across organizational lines to solve problems faster. So I have learned, especially in technical organizations, that engineering alignment is absolutely key.

David Puner: [00:09:37] Before that, when you were talking about knowing the culture, how does that play into going into a new organization, where you've got to figure the culture out? How does that coincide with CSOs fostering cultures of security and transparency within their organizations? When do you start chipping away at that? If there are changes that you want to make, is that right away, while you're figuring out the overall culture, or do you need to spend a little bit of time learning that overarching culture before you can start to chip away at the culture of security?

Roland: [00:10:14] Well, David, if it's a black-and-white question, I think you have to wait a while, because what comes first, the chicken or the egg, right? Transparency doesn't come before you understand the culture, at least at some level, so you understand what type of transparency you have to drive at. And as a matter of fact, I would separate the two. I think understanding the culture to make change is one thing. Transparency is both an internal and an external requirement as a chief security officer. And that can be done in an entirely different workstream, if you will.

I think getting to know the culture and how you approach change go hand in hand, right? If you’re going to make, or you need to make changes to an organization or to how you’re delivering security services or to how the business approaches the component of culture, you have to know the culture before you say, you know, listen, I want you guys to understand more and know more about security and whatever it may be.

Roland: [00:11:00] For instance, let's just take TikTok. The average age of a person at TikTok was literally 50 percent less than my age. I was 50.

David Puner: [00:11:07] Okay.

Roland: [00:11:09] And the average age was 25.

David Puner: [00:11:10] Uh-huh.

Roland: [00:11:12] Okay? So, you’re going into an organization that thinks different, that wants things different, and needs to understand. And, by the way, I have to give messages. I have to be present in their day-to-day work lives to ensure that I am leading them down the path. So, how did they do it? How did they want it? Well, they wanted TikToks. I did TikToks. They wanted deep technical analysis because it’s a technical organization. We gave them the deep technical analysis.

Every organization is different. You have to learn that. You have to learn what the people who are experiencing that work environment, who are creating that work environment, need and want, and then apply that culture of security in a context they will accept. And then you can move on to, okay, how are we going to be transparent? How are we going to deliver a view that they see, believe, and understand, and do that externally as well? It's always fun trying to figure that out.

David Puner: [00:12:00] So how is your TikTok content creation skill at this point? You got any kind of crazy dances you’ve coined or anything like that?

Roland: [00:12:05] Yeah, no, you’re not gonna see me doing any really crazy dances. That’s for sure. And I think I was fortunate enough to have people that would take the crazy stuff I did and turn it into a TikTok. And I think my TikTok creation days are probably over for now.

David Puner: [00:12:18] All right. Well, never say never, but you heard it here. Sticking with TikTok, given TikTok’s enormous user base, how did you address the challenges of securing identities and personal information on this enormous social platform?

Roland: [00:12:30] I think we have to take a step back and talk about identities, because there are so many things associated with identities. If you're talking about users and their identities, that's one thing. If you're talking about the integration of usage of the platform, being able to identify threats within the platform, that's another. User identities, machine identities, device identities: all of these things together are components of a threat-led defense program.

So you really have to be able to understand the components of identity. I mean, you know, think about it. Username and password are just one part of that: how they integrate with the application, what parts of the application stack they are trying to get to, what they are attempting to do. You have to be at a level of view about the identities, machines, and applications where you can tell whether the relationship between the identity and the token is appropriate or not. And so there's a lot there. So having a good, fixed, grounded capability in understanding the identities is number one.

Roland: [00:13:30] The second is understanding a broader intelligence aspect. So think about a focus on advanced intelligence platforms that enable analytical considerations for identities as groups. Things that we worry about are things like misinformation, disinformation, influence operations on the platform, criminal organizations creating inauthentic identities to do harm.

So we have to look at groups of groups and apply deep analytics into how they act together to be able to understand: is that a true identity or not? Is it a fictitious identity? So there are two very distinct requirements. One is getting a handle on identities to give them enablement into the platform, with assurance at the level of the device and of what they're asking for. And on the other hand, you have to be able to understand what is or is not a true identity, and how identities work together to form either criminal elements or policy-violating groups that you need to stop to keep it a safe and trusted platform.

David Puner: [00:14:30] And so keeping it a safe and trusted platform while also keeping in mind user experience, especially on a platform that prioritizes user engagement. How do you balance that need for security with user experience?

Roland: [00:14:41] I think what we need to start with when we talk about user experience and the prioritization of certain security capabilities within the authentication environment is first, you have to start with the basic understanding that technology has to be usable. If you want people to use it, it has to be usable. Like you can’t expect people to have six RSA tokens and 14 applications to get into their email. The level of security isn’t appropriate for the risk.

Roland: [00:15:00] And that gets to my next point. You have to be able to define, articulate, and measure the risk to make it a business risk decision, right? It’s a business risk decision unless it’s a government requirement or it’s a regulatory requirement in some way for an agency or something else. These are all business risk decisions. And if you don’t do that due diligence or that detailed analysis behind it to prove or disprove what the perceived risk or threat is, then how can you even make that decision?

Next, I think it has to be in line with jurisdictional requirements and the industry as a whole. Meaning, if you're Google or Facebook, and Facebook is forced to do something way over here when the rest of the industry is doing something over here, that doesn't make a lot of sense based on the type of information they hold. So it has to be in line with the industry in order for the product to be usable.

Roland: [00:16:00] I think whether internally or externally facing, we need to continue to adopt technology that reduces the actions required of users, actions they're simply not going to adopt anyway. So how do we insert capabilities within the authentication transaction that do a better job of validation and assurance that the user is who the user says they are, that they have the appropriate access necessary, and that there isn't any nefarious transaction being attempted at that time? Continuing to look for that type of automation and that type of technology within that sphere will be better for all.
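To make that concrete, here is a minimal, hypothetical sketch of the kind of risk-scored check Roland describes inside the authentication transaction: device and behavioral signals are folded into the login decision, and the user only sees a step-up challenge when the risk warrants it. The signal names, weights, and thresholds are illustrative assumptions, not anything from TikTok or ADP.

```python
from dataclasses import dataclass

@dataclass
class AuthSignal:
    """Signals gathered during one authentication transaction (illustrative)."""
    known_device: bool        # device previously enrolled / fingerprinted
    geo_velocity_ok: bool     # no impossible travel since the last login
    credential_age_days: int  # time since the credential was last rotated
    session_anomaly: float    # 0.0-1.0 anomaly score from behavioral analytics

def assess_login(signal: AuthSignal) -> str:
    """Return 'allow', 'step_up', or 'deny' from a simple additive risk score."""
    risk = 0.0
    if not signal.known_device:
        risk += 0.4
    if not signal.geo_velocity_ok:
        risk += 0.4
    if signal.credential_age_days > 180:
        risk += 0.1
    risk += signal.session_anomaly * 0.5

    if risk >= 0.8:
        return "deny"      # block the transaction and alert the SOC
    if risk >= 0.4:
        return "step_up"   # require MFA or passkey re-verification
    return "allow"         # frictionless login for low-risk transactions

# Enrolled device, normal location, fresh credentials, low anomaly -> "allow"
print(assess_login(AuthSignal(True, True, 30, 0.05)))
```

The point of the sketch is the shape of the decision, not the numbers: most logins pass silently, and friction is reserved for the transactions where the extra assurance is actually buying down risk.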

David Puner: [00:16:30] I want to ask you another question about ADP, where you spent 10 years as a CSO or as ADP’s CSO. How much did compliance shape your role there at ADP and what was your transition like when you moved from ADP to TikTok? How did those two CSO roles differ?

Roland: [00:16:48] So first, I think compliance at ADP, David, because we weren't as regulated an entity as some other businesses, like financial services or critical infrastructure, was a little bit different, and it pushed me towards operational assurance and compliance validation. Meaning the most important thing at ADP within their compliance spectrum internally was around operational resiliency.

They knew that they had to pay one in six people on your street. They knew that they moved trillions of dollars a year. They knew that economies around the world, working economies, needed them up and operational. And in order to do that, there was a very strong focus on resiliency of infrastructure and capabilities.

Roland: [00:18:00] And so at ADP, I spent a lot of time going from "can I just get my SOC 2 done, or my ISO certification, on a yearly basis?" to "could I push a button and see, for any data center around the world, the current status of every critical control at that moment?" And we got there. We got to something that we called controls assurance. You can come at me any day of the week, you know, that was kind of our attitude. We were operationally ready, and our controls were validated. It just wasn't a paper exercise for us.

And I take that into every other job I do and every organization that I have advised since. It is about continuous controls assurance. It’s not about compliance. If you can do continuous controls assurance and validation, you can be compliant, period, as long as you’ve structured the right controls for the type of business you’re in.
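As an illustration of that "push a button" idea, here is a minimal, hypothetical sketch of continuous controls assurance: each critical control registers a validation function, a scheduler runs them continuously, and compliance reporting reads from the latest snapshot rather than from a yearly paper exercise. The control names and the stubbed checks are assumptions for the example.

```python
import datetime
from typing import Callable, Dict

# Each critical control registers a validation function that returns True/False.
# In practice these would query real systems (EDR coverage, backup jobs,
# privileged-session recording, etc.); here they are stubbed for illustration.
CONTROLS: Dict[str, Callable[[], bool]] = {
    "endpoint-agent-coverage": lambda: True,
    "privileged-session-recording": lambda: True,
    "backup-restore-tested-30d": lambda: False,
}

def run_controls_assurance() -> dict:
    """Evaluate every registered control right now and return a status snapshot."""
    snapshot = {
        "checked_at": datetime.datetime.utcnow().isoformat(),
        "results": {name: check() for name, check in CONTROLS.items()},
    }
    snapshot["all_passing"] = all(snapshot["results"].values())
    return snapshot

if __name__ == "__main__":
    status = run_controls_assurance()
    for control, passing in status["results"].items():
        print(f"{control}: {'PASS' if passing else 'FAIL'}")
    # A scheduler (cron, a workflow engine, etc.) would run this continuously
    # and feed both dashboards and audit evidence from the same snapshots.
```

Structured this way, an audit becomes a read-out of states the organization already monitors, which is the distinction Roland draws between controls assurance and compliance.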

That probably answers the first part of your question. With regards to the second, I think the two positions were totally different. ADP had a culture of security as an integrated component of the business: people, process, technology, money movement. It was broad-spectrum security, extending even down to its customers, ensuring appropriate financial controls, validation, and protection against criminal actors in their fraud programs.

Roland: [00:19:00] TikTok, I think, had deep technical security, deeper than any organization I had ever seen. The security focus was embedded deep in the technology, meaning that although organizationally they understood the importance of compliance and of protecting protected information, it was at every level and every component of the technology, from the edge, to how data was moved, to how applications accessed data, to how we validated access from human identities into data, and all the way through. It was such a technical view. I really had to up my game a little bit on the technical side in comparison.

David Puner: [00:19:45] Yeah, I mean this is also at a global level that just seems staggering, the number of variables involved there.

Roland: [00:19:53] Yeah, from TikTok’s perspective, it was a technology specialty that had to be embedded at every level of the stack.

David Puner: [00:20:01] Going back to the content creation for a moment, today I saw a video that you put up on LinkedIn where you were talking about data defense and access assurance. What is it and how does identity figure into the equation?

Roland: [00:20:14] Okay, so what is it? Let’s talk about what data defense and access assurance is. So first and foremost, data defense is the ability to understand what data you have within your care, custody, and control as an organization. And I say it’s pretty simple. As long as you know what the data is, know where the data is, know where it came from, know who has access to it, know what you can do with it, and know where it went, right? That seems like six really simple things. It’s not.

Roland: [00:21:00] And the problem is, it's a core requirement for most organizations from a regulatory perspective. If you operate as a multinational, there are so many jurisdictions; even if you're only in the U.S., depending on the type of data you have and the data you hold, you might have 50 different jurisdictions, plus Puerto Rico, I guess, each requiring some sort of different legal structure for how you're going to have care and custody of that data. So it is paramount to any digitally enabled organization to understand the data within its care and custody and how you manage it.

Number one. Number two, if you’re going to use advanced data concepts, like, I don’t know, let’s see what’s a popular word these days, AI, right? AI is fundamentally built on data. So how can you have secure, trusted AI if you don’t have a handle on all the things I just said? What data do I have, where is it, who has access to it, where has it been, where is it going, what can I do with it? If you can’t do that, how do you know that your data is trusted?

Roland: [00:22:00] And so that’s what data defense and access assurance is all about, is creating a trusted capability to be able to use the informational assets within your care, custody, and control to be able to do great business things for your business, for your customers, and your shareholders. So that’s the why.

I think the identity aspects of it are super important because everybody focuses on, does Jimmy have access to that? Does employee so-and-so from that jurisdiction have access, that country, have access to it, right? Because of all the political stuff. Does machine X have access to that? Does token Y used for application A through R have access to that? Does microservice G have access to that data store, right? People tend to forget that the identities have to go much broader than just a human identity. It has to look at machine, it has to look at application, it has to look at tokenized use assurance and management. It has to look at all of those things to truly defend the data assets within your care and custody.
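One way to picture that is a minimal, hypothetical sketch of a data-inventory record that answers Roland's six questions, paired with an access check that treats human, machine, application, and microservice identities the same way. The field names, purposes, and jurisdictions are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class DataAsset:
    """One inventory record: what the data is, where it lives, where it came
    from, who may touch it, what it may be used for, and where it has gone."""
    name: str
    classification: str              # e.g. "user-PII"
    location: str                    # e.g. "us-east object store"
    origin: str                      # source system or ingestion pipeline
    allowed_purposes: Set[str]       # e.g. {"recommendation", "fraud-detection"}
    allowed_jurisdictions: Set[str]  # where access may originate
    transfer_log: List[str] = field(default_factory=list)  # where it went

@dataclass
class Identity:
    """Any requester: human, machine, application, or microservice."""
    identity_id: str
    kind: str          # "human" | "machine" | "application" | "microservice"
    jurisdiction: str
    purpose: str

def may_access(identity: Identity, asset: DataAsset) -> bool:
    """Allow access only when both purpose and jurisdiction match policy,
    regardless of whether the requester is a person, a service, or a token."""
    return (identity.purpose in asset.allowed_purposes
            and identity.jurisdiction in asset.allowed_jurisdictions)

asset = DataAsset("watch-history", "user-PII", "us-east object store",
                  "ingestion-pipeline-7", {"recommendation"}, {"US"})
print(may_access(Identity("svc-recsys-01", "application", "US", "recommendation"), asset))  # True
print(may_access(Identity("analyst-42", "human", "EU", "ad-targeting"), asset))             # False
```

The sketch is deliberately identity-type-agnostic: the same policy question is asked whether "Jimmy", a microservice, or a token is doing the asking, which is the broader view of identity Roland is arguing for.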

David Puner: [00:23:00] And machine identities vastly outnumber human identities, and they're continuing to grow explosively.

Roland: [00:23:04] Wait, it’s something like 100 to 1, or high 90s to 1, or something like that.

David Puner: [00:23:10] I’ve heard a little bit lower than that, but regardless, it’s explosive.

Roland: [00:23:12] Right. And so if you don’t focus on identity, how are you focusing on data?

David Puner: [00:23:16] That’s a great point. So yeah. I do want to get back to AI in a few minutes, but regarding these identities, and particularly machine identities, what are some of the challenges you’re seeing in the context of new data laws?

Roland: [00:23:27] Well, I mean, laws are absolute, right? You must, thou shalt. But the coding of machine identities and application identities was not. So proving the applicability of a law to a given access is tough. You’ve really got to understand the law and understand your capability to enforce it.

Roland: [00:23:52] I think the second thing is manageability. There’s layers and layers and layers, right? Like I have an application that uses a data lake that does some data transformation that then brings it back to the primary application that delivers it to a user that asked for it. How many identities were just used, David? Five? Six?

David Puner: [00:24:08] Sounds right. Sounds good.

Roland: [00:24:10] In one transaction? How do you manage that? And how do you manage the transparency, the assurance, et cetera, when these things are so absolute and validation and verification come into play? So there’s learning to build better constructs for authentication and access to informational assets through applications and the delivery of those, and being able to write appropriate logging pipelines that give you better capability to address these legal hurdles and these data privacy laws.

Roland: [00:24:52] And I think there are multiple points of consideration that organizations have to look at. You know, a lot of people do token reuse and use the same things in dev as they use in production. That stuff has to stop. Organizations have to have very specific points of manageability for token encryption and cross-application use, and reduce their threat surface and their compliance profile by managing tokens specifically to an application. That may mean a higher bar to meet at development and construction time to go to market and get to GA. But once it’s there, it’s going to be so much easier to comply and to prove that you’re adhering to those compliance laws.
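Here is a minimal, hypothetical sketch of what per-application, per-environment token scoping can look like: every application gets its own credential per environment, so a dev token can never be replayed in production and cross-application reuse is rejected by design. The in-memory "vault" and names are illustrative assumptions; a real deployment would use a secrets manager with short-lived, auto-rotated credentials and the logging pipeline Roland mentions.

```python
import secrets

VAULT: dict[tuple[str, str], str] = {}  # (application, environment) -> token

def issue_token(application: str, environment: str) -> str:
    """Mint a distinct token scoped to exactly one app in one environment."""
    token = secrets.token_urlsafe(32)
    VAULT[(application, environment)] = token
    return token

def validate_token(application: str, environment: str, presented: str) -> bool:
    """Reject any token presented outside the (app, environment) it was issued for."""
    return VAULT.get((application, environment)) == presented

dev_token = issue_token("recommendation-svc", "dev")
prod_token = issue_token("recommendation-svc", "prod")

assert validate_token("recommendation-svc", "dev", dev_token)
assert not validate_token("recommendation-svc", "prod", dev_token)  # no dev token reuse in prod
assert not validate_token("billing-svc", "prod", prod_token)        # no cross-application reuse
```

Scoping tokens this narrowly is the "higher bar" at build time: more credentials to mint and rotate, but a far smaller blast radius and a much simpler story when proving compliance.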

David Puner: [00:25:00] Taking it back to TikTok for a moment. TikTok has obviously been in the news a lot over the last couple of years. And we’re not here to pick the issues apart. That’s not what the focus of this particular podcast is, but how do you think TikTok and other organizations facing similar scrutiny or challenges can gain a reputation for privacy and security consciousness? In other words, what can organizations with less than positive security reputations do to turn things around?

Roland: [00:25:27] I think that’s every company, not just TikTok-specific. But I mean, it starts with showing the work. You know, it’s like our kids and their math problems today. You’ve got to show the work, and if you are incapable of doing that, you’re going to be incapable of defending against what naysayers or political-football, armchair-pontification people are saying. So show the work, number one. Number two, lead the market.

Roland: [00:25:40] Do industry-leading things in your market segment that show that you truly understand what security, risk, and privacy mean for the industry or the organization that you are in. Be out there in front of your peers, have those discussions, stand up and talk about the innovative work that you’re doing, which leads me to kind of my third point, be transparent. Transparency reports, I think, are the best thing since sliced bread. Show the world what you’re actually doing, what you’re stopping. That’s one way of doing it. The other is show your customers.

The last three organizations I was at, we built critical incident response centers or what are commonly referred to as fusion centers or SOCs. We built them in alignment with our executive briefing centers. If you’re coming in because you’re a customer, bring your security and risk and privacy leadership and have them come down into our security command centers and our critical incident response centers and watch the work happen, come in, see us do the work on a day-to-day basis, right? Let us prove it to you. So I think if you show the work, you lead the market through innovative capabilities, talk about it, and be out front with your peers, and you’re transparent with it, both through reporting, validation, certification, accreditation, as well as bringing people in to see the work. I think you’d get over that stuff, no problem.

David Puner: [00:26:30] As far as emerging threats go, obviously, you know, the threat landscape is insane and huge and ever-evolving. What emerging threats concern you the most? And what can organizations do to prepare for them? I don’t want to get ahead of you, but I assume AI is involved in this mix. And if so, you know, that’s probably a good segue into our conversation about AI.

Roland: [00:26:50] Is that a thing? AI? Just kidding.

David Puner: [00:26:52] I’m more of a machine learning guy, but more folks seem to like the AI.

Roland: [00:27:00] Yeah, it’s important. I mean, until generative AI came out a couple of years ago, it was only larger, more technical organizations that were doing some form of AI or machine learning. Certainly, I’d seen it for the last several years in the organizations I was in, due to leaps and bounds made after the RPA phases, jumping into true ML to do some really cool stuff, especially at ADP and TikTok.

But this bow wave has left organizations and practitioners a little dumbfounded on what to do, right? Regulators first came out and their entire focus on “AI security” was around bias defense. Sure, bias is important for a lot of different reasons, but that’s a quality issue, not a security issue. So ensuring the appropriate validation of AI, appropriate use, and a quality of that AI is critical. But at a security layer, there’s still multiple parts of the stack that need to be addressed. And it’s so nascent.

Roland: [00:28:00] Right. Like, think about it. We have the infrastructure, so it’s built on cloud predominantly. So we know how to protect the cloud. There are identities involved. We can take logging from the system, both from the base infrastructure, but as well as your AI engines that do certain things. So then we can provide that back to applications that are being built to do certain things. We’ve had organizations like NIST that have come up with real good risk frameworks based on the type of AI you’re using, the type of data you’re using, the type of customer you’re serving, what are real risk areas, so you can start to go look at those and draw areas around higher risk probabilities that you should go focus on. So you can do that.

Roland: [00:29:00] And you’ve had organizations like MITRE that have created ATLAS, a threat-led defense program that looks at 12 parts of the technology and where they’re susceptible to threats, whether it’s an insertion threat or any of the other areas. It gives you a realistic view of what that is, the technologies involved, and where there are KEVs, or known exploited vulnerabilities, in those areas. And so you can do things, but are you doing things if you haven’t got a dedicated AI program set up?

So the question is, you know, what can you do? Focus on creating an AI team, like you created a cloud team back when cloud supposedly wasn’t going to be a thing, and then the next thing you know, you’re taking entire data centers and smashing them over. So get to understand how to defend AI, what you can and can’t do, and then start instrumenting things based on a well-formulated risk principle. So that’s number one.

David Puner: [00:30:00] Are you seeing hesitation among organizations setting up dedicated AI teams?

Roland: [00:30:03] I’m just not seeing it happen as fast as the market is taking to AI. I’m not sure if it’s hesitation; I’m not sure if it’s funding. We have seen a drop in security organizations’ funding in the last couple of years. That could be part of it. There’s a question of how much AI an organization is going to use. They’re still getting constructs set up for appropriate-use policies. What are organizations going to be doing? Who’s managing that overall? I would call that, in general terms, governance, right? So you don’t see a big migration towards setting up very specific teams unless you’re in big organizations that are already doing this.

So hopefully people will, and it becomes a part of their 2025 plan to either migrate existing resources or start training their people on it, but at least get a handle on, and be very clear about, what they can and can’t do, and start focusing on the riskiest areas. The second area is probably data defense, which we just talked about.

Roland: [00:31:00] I think the amount of whiplash people are going to see from regulatory incidents and issues associated with access to data, inadvertent access, inadvertent release, with the multitude of new laws that have come out in the last three years, is going to be concerning. So people have to build a very clear capability around understanding the data that’s in their environments: who can access it, where it’s managed, where it moves to, what AI is using it, and all of those sorts of things. So I see what we used to call a data security team, which typically focused on unstructured data, getting it all cleaned up, and access to file shares, taking on a much broader scope covering all the data within a business.

Roland: [00:32:00] The third of the four things I would tell people to focus on or start looking at is microservices. Organizations, and rightly so, created capabilities to spin up technology services to support the applications they’re taking to market, and to spin them up and spin them down in these clusters of microservices that allow them to scale up and back off and achieve not just scalability, but reasonable cost management.

But the problem is those microservices are all connected through microservice meshes, and security isn’t instrumented in there. So microservices can talk to microservices and can take data from other microservices. And so all of a sudden you have, by accident, an automated shadow IT created by these microservice meshes.

If you were to ask a CISO, how many microservices do you have? And how are they connected? And what’s your transparency of visibility? And how many APIs are connected to those microservices? You’d probably get a pale-faced person looking back at you going, great question. You know, I got a life lesson in the last two or three years around microservices. And it is an important area for us, I think, to all understand.
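A minimal, hypothetical sketch of the countermeasure is deny-by-default service-to-service authorization: every caller/callee/operation pair must be explicitly declared, and every decision is logged, so undeclared data paths can't appear silently. The service names are made up, and in practice this policy would live in the mesh layer itself (mutual TLS identities plus authorization policies) rather than in application code.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("mesh-authz")

# Explicit allow-list of (caller, callee, operation) tuples.
ALLOWED_CALLS = {
    ("checkout-svc", "payment-svc", "POST /charges"),
    ("feed-svc", "profile-svc", "GET /profiles"),
}

def authorize(caller: str, callee: str, operation: str) -> bool:
    """Permit only explicitly declared service-to-service calls; log every decision."""
    allowed = (caller, callee, operation) in ALLOWED_CALLS
    log.info("mesh call %s -> %s %s : %s",
             caller, callee, operation, "ALLOW" if allowed else "DENY")
    return allowed

# An undeclared path is denied and shows up in the logs instead of quietly
# becoming accidental shadow IT inside the mesh.
authorize("feed-svc", "payment-svc", "GET /charges")  # -> DENY
```

The useful side effect is visibility: the allow-list plus the decision log is, in effect, an inventory of how many microservices and APIs are actually talking to each other, which answers the question Roland says most CISOs can't.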

Roland: [00:33:00] And the last area is authentication. I know we’ve been dealing with it for so long and for so many things, but our focus has been on the human identity. This actionability around machine identity, people identity, and application identity is something people have to solve in their environments, and do it soon, if they’re going to advance the digital changes that their company is trying to make.

David Puner: [00:33:30] So with all these changes that are going on, the evolution, the morphing, everything that’s in some way accelerated by AI and machine learning, and in other ways just the nature of how things are trending into the future: how do you see security teams morphing or evolving over the next few years? And how do you think we, as an industry, can plan for and develop the next generation of security practitioners?

Roland: [00:33:52] Well, first, it’s going to be a massive leg up. I mean, thinking about what AI is going to give us at an immense data analysis and investigatory level, it’s going to create contexts and connections that we haven’t been able to make before, way faster than humans are capable of. So I can’t wait to see it.

Roland: [00:34:00] Let me give you an example of that. Google just proved it in a new application that works on anti-money laundering, which is very hard to do, right? I mean, I think the average capability on validation of anti-money laundering in existing applications is sub 90 percent, and these banks and organizations have thousands of people doing it. They proved that they could do it through technology alone, using AI, with greater than 98 percent validation in finding negative-impact events.

I mean, like with no people, I mean, like that is huge. We’re going to stop bad things from happening at a faster and more prolific rate than we could, you know, with the thousands of people we’ve thrown at it. That’s a wonderful thing for the world. I also think it’s going to get us to automated defense. And it brings back the self-healing network days that people will laugh at.

Roland: [00:35:00] There is a reality that says we can validate certain constructs through the use of AI. Take application development, for example. If I’m using AI, and all of a sudden AI, or I should say machine-developed code, is coming at a speed that I’ve never seen before, can humans really get that through their SDLC in a reasonable way? Probably not.

But if I give a high and low construct within an AI-capable system for looking at that code and validating it, imagine how much I can see and validate without even having a human look at it. And that type of automated defense goes further: that should never happen, stop that process, reset it, move it over here. Like RPA on steroids for security. I only see a plus side when it comes to AI.
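Read literally, that "high and low construct" could look something like the hypothetical gate sketched below: an automated stage scores each machine-generated change, auto-approves only the clearly safe ones, blocks the clearly unsafe ones, and routes the middle band to a human reviewer. The scoring function and thresholds are placeholders, not anything Roland specified; a real pipeline would combine static analysis, policy checks, and an AI reviewer's confidence.

```python
from dataclasses import dataclass

HIGH_CONFIDENCE = 0.95   # at or above this: merge without human review
LOW_CONFIDENCE = 0.60    # below this: block and reset the process

@dataclass
class CodeChange:
    change_id: str
    safety_score: float  # 0.0-1.0, produced by the automated analysis stage

def gate(change: CodeChange) -> str:
    """Decide what the SDLC pipeline does with a machine-generated change."""
    if change.safety_score >= HIGH_CONFIDENCE:
        return "auto-approve"     # no human in the loop
    if change.safety_score < LOW_CONFIDENCE:
        return "block-and-reset"  # stop the process, regenerate or escalate
    return "human-review"         # the middle band still gets human eyes

print(gate(CodeChange("change-a", 0.98)))  # -> auto-approve
print(gate(CodeChange("change-b", 0.42)))  # -> block-and-reset
```

The two thresholds are the whole idea: human attention is spent only on the ambiguous middle, which is what lets review keep pace with machine-speed code generation.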

David Puner: [00:35:37] We’ve talked now about, from a defender’s standpoint, what AI can do. But then you take it to the adversarial side, to the threat actors. What are your concerns, or do you feel confident that we’re going to be able to keep up?

Roland: [00:35:48] Are we going to be able to keep up? I don’t know. I mean, think of the average threat actor: AI reduces the level of competence that individual needs to have to play in the game, right? But it’s also going to make us faster. Do I have a crystal ball? There are going to be a lot of new types of attacks, because AI will be able to tie different threat patterns together to make attacks that we’ve never seen before and that weren’t possible before. So, I don’t know.

Roland: [00:36:00] I think as long as there are bad guys and computers in the world, our jobs are always going to be safe because there’s always going to be things to do. Will they get ahead of us? I don’t know. I think the innovation and the human intellect and the people that want to defend, you know, the free societies of the world are going to be out there creating new technology that’ll stop the people that would do harm. So my glass is half full. I’ll put it that way. And I think there’s a lot for all of us to do.

David Puner: [00:36:39] All right, so obviously looking into the future with the crystal ball is something, well, I don’t know that I’ve ever talked to anybody who actually had the crystal ball. But now looking back a few years to 2015, when you wrote and published a book called Becoming a Global Chief Security Executive Officer. How is the path you mapped out in that book in 2015 different today, you know, nine years later, which is like lifetimes ago in this world?

Roland: [00:37:00] Listen, the book was based on a concept of organizational development and the leadership skills necessary for chief security officers. What I would say is 90 percent of that book is still reasonable, especially from a converged security perspective. It talks about the umbrella capabilities of security, risk, and privacy programs as a single risk entity for global companies, and how chief security officers need to understand the business and develop their leadership organizations and things of that nature.

Roland: [00:38:00] I think what’s changed is probably some of the constructs, meaning that there are technology requirements that are slightly different today. There are larger focus areas on advanced technology concepts that I think I would add to a second version, if you will. And I would actually like to see kind of a prequel to Becoming a Global Chief Security Executive, one that talks about becoming a security practitioner: what are the building blocks required and necessary to become a good practitioner? That’s really some of the stuff that I didn’t get at in the book, but I think it would be great for the next generation of cyberthreat defenders today.

David Puner: [00:38:39] So we’ve done the future, we’ve done the past, getting into today. In your current role as partner and principal at the Business Protection Group, what are you up to? And might we see you in another, uh, chief security officer role one of these days?

Roland: [00:39:00] Not if my wife has anything to say about it. It’s been a great couple of years learning a new skill. When you’ve been in operations since you were 17 and a half, from the military to law enforcement and then into this career field, you know, it’s 24 hours a day, seven days a week. It’s on planes every week. It’s traveling around the world. That is hard to decouple from who you are when you operate at that speed and at that level.

Roland: [00:40:00] It took me a little while to do that, but what I’ve found is that I get just as energized and jazzed about helping our peers in the field with security, risk, and privacy organization development, delivery, and capabilities. And so I find most of my work is for peers that want to develop a new security program they don’t have, or develop a new capability within their team that they don’t have, or helping mentor younger, upcoming executives into different roles. Being able to do that without the operational responsibility is pretty cool. So I’m having fun doing that. I’m also working, in the areas we’ve talked about around data defense and access assurance, especially in the DSPM market, data security posture management.

I’ve been looking at companies in that space and really digging into AI defense. So I’m doing some work in the critical areas that I think our practitioners will need over the next two to three years, helping companies achieve what we as practitioners need. And, um, yeah, other than that, getting a little bit more fishing in. So yeah, I don’t think you’re going to see me as a CSO anytime soon.

David Puner: [00:41:00] Yeah. You had mentioned, I think, the ashen-colored security practitioner earlier, or maybe it was pale-faced, something like that. Nice to get outside and get a little sun these days.

Roland: [00:41:02] Very true.

David Puner: [00:41:05] And I think you’re pretty close to me today, right? Are you in Boston today?

Roland: [00:41:06] Yeah, I’m up north. I’m in New Hampshire, in the mountains of New Hampshire, enjoying the beautiful, clear, fresh woods up here that you don’t often get down in Florida.

David Puner: [00:41:10] No, no, fantastic. We love the great state of New Hampshire, just north of our headquarters here in Newton. I’m sitting nearby, and maybe I’ll get in the car after this and meet up with you and do some fishing.

Roland: [00:41:20] Sounds good, David.

David Puner: [00:41:23] Roland, thanks so much for your time today. Really appreciate it. We’d love to have you come back on one of these days so we can go even deeper, but fantastic to have you on the podcast. Thanks so much for coming on.

Roland: [00:41:30] Hey, I love being on a podcast that dives deep into these things. These are real issues, and I’m glad you’re bringing them up. Hopefully somehow we’ve helped someone else think about these things a little bit differently, and we’re going to make their day just a little bit better. Thanks for having me, David.

David Puner: [00:41:40] Thanks for listening to Trust Issues. If you like this episode, please check out our back catalog for more conversations with cyber defenders and protectors, and don’t miss new episodes. Make sure you’re following us wherever you get your podcasts and let’s see. Oh yeah. Drop us a line if you feel so inclined, questions, comments, suggestions, which come to think of it are kind of like comments. Our email address is trustissues, all one word, at cyberark.com. See you next time.