6월 25, 2024
EP 55 – AI Insights: Shaping the Future of IAM
In this episode of Trust Issues, Daniel Schwartzer, CyberArk’s Chief Product Technologist and leader of the company’s Artificial Intelligence (AI) Center of Excellence, joins host David Puner for a conversation that explores AI’s transformative impact on identity and access management (IAM). Schwartzer discusses how CyberArk’s AI Center of Excellence is equipping the R&D team to innovate continuously and stay ahead of AI-enabled threats. Learn about the future of AI in IAM, the role of AI in shaping new business models and the importance of an experimentation culture in driving user experience (UX) improvements. Gain insights into the methodical, data-driven approaches to monetization strategies and the significance of learning from on-the-job experiences. This episode is a must-listen for anyone interested in the intersection of AI and IAM, and the opportunities it presents for leading the transition in the industry. Tune in to uncover what’s coming down the AI pike and how it will influence the future of IAM.
For more from Daniel on this subject, check out his recent blog, “Predicting the Future of AI in Identity and Access Management.”
David: [00:02:00] Daniel Schwartzer, CyberArk’s chief product technologist and the leader of CyberArk’s artificial intelligence center of excellence. Welcome to Trust Issues.
Daniel: [00:02:09] Thank you so much, David. It’s a mouthful, right? It’s a lot of, uh, a lot of words for my title.
David: [00:02:15] Yeah, yeah, it’s not exactly Bond, James Bond, but it’s kind of like, uh, has the same meaning in more words, right?
Daniel: [00:02:22] So, sort of James Bond, yeah.
David: [00:02:25] Right. So today we’re gonna talk about some artificial intelligence predictions you recently put out in a CyberArk blog titled Predicting the Future of AI in Identity and Access Management, otherwise known in the business as IAM. But before we get into that, let’s give the audience an idea for your AI street cred, ’cause you have some pretty significant AI street cred. What do you do as CyberArk’s chief product technologist and what’s been the path that’s led you to this role?
Daniel: [00:02:55] I’ve been in the high tech industry for over 20 years. I would actually dare to say 25, but I will stop at 20. I think that’s enough. And I think I’ve been through all of the technical roles and many of the managerial roles in the R&D organizations. I’ve started as a developer and then I rose to the team lead and group manager. Eventually becoming a VP R&D at a startup and finally realizing that what I’m enjoying the most is the actual impact at the organization and having larger organization under me would actually hinder my desire to stay relevant and stay technical. So in that sense, the idea of becoming a chief technologist, somebody who is driving the technology roadmap of the organization is actually a sort of a dream come true.
David: [00:03:47] So outside of your role, just as like in your day to day, because this is what you do, and obviously it’s a pretty sizable role. Are there any things that you do every morning to make sure that you’re up to speed on the latest, greatest and what’s going on in this world? What are things that you sort of dig into outside of work to make sure that you’re as up to speed and on top of these things?
Daniel: [00:04:10] One of the main things I need to be doing all the time is to keep being up to date. I’m reading a lot. I’m listening to many podcasts and watching a lot of videos, lots of training, almost all the time. I’m in the middle of one or two or three Udemy trainings in some technical aspect or some technical framework, just to keep up to date and make sure that I’m on top of things. And things are getting faster.
David: [00:04:40] They sure are. And I gotta say, from a personal standpoint, it’s great to be interacting with you here, live, in a podcast format. You and I have spent a lot of time in live docs, all in the spirit of blogging and the written word, but it’s great to actually be here, talking to you live.
Daniel: [00:04:58] Yeah. It definitely is exciting. For quite a long time, I think for over three years, we’ve been publishing our Medium blog. It’s called CyberArk Engineering, where our engineers are publishing technical articles related to our day to day work. So whoever’s interested can go there. And check it out. It’s now coming out, I think, weekly. It used to be bi-weekly, and now it’s coming out almost weekly. So it’s a very good output for technical folks.
David: [00:05:29] And now that I’m not involved in the process anymore, it’s coming out faster, isn’t it?
Daniel: [00:05:33] It was definitely one of those things that helped us. So I’ll point out, that is an unofficial CyberArk blog, and it’s definitely worth checking out on Medium. The blog we were referencing earlier is on CyberArk’s blog, and you can find that on cyberark.com, of course. And in addition to being CyberArk’s chief product technologist, as we already alluded to at the beginning here, last fall you added another responsibility when you took on the role of head of CyberArk’s Artificial Intelligence Center of Excellence, which launched in September 2023. So I guess the obvious question to follow that up would be, what is the CyberArk AI Center of Excellence and what does being the leader of it entail?
Daniel: [00:06:17] AI has been around for a long time and I think ChatGPT, when it was released in, I think, November 2022, really brought AI and specifically generative AI to the surface and no company could ignore this any longer. From the infancy, this technology has become sort of an expectation that every company would have capabilities based on the generative AI. And that’s when CyberArk realized we needed to make a move in this direction as well. We’ve had a lot of ML initiatives going on in the background, but we wanted to combine all of them under the same roof, under the same management. Making sure that all of these initiatives are aligned in the general direction that we want to lead with. And that’s when the AI Center of Excellence was announced. I personally have been in the data and AI/ML world for a while. I’ve led a big data team or big data group at one of the largest companies I worked at. It was, I think, 10 years ago. And again, those technologies were in their infancy yet. We used Hadoop and all of those big data tools. A few years later, like if we come to the current time, the technology stack has evolved significantly. And now we use a sort of a mix between the generative AI capabilities that were just recently announced and the classical AI and ML capabilities that were there for a long time.
David: [00:07:38] So you mentioned the release of ChatGPT that reverberated around the world in late 2022, and obviously that was a watershed moment for everyone. For you, you had been immersed in AI before that, so how did that release affect you, what was surprising about it, and when the CyberArk AI COE launched in September 2023. What did that ChatGPT release have to do with everything that has transpired with you and your world at CyberArk since then?
Daniel: [00:08:08] The release of ChatGPT blew away a lot of people, including myself, and I think most technologists around the world. So it was not an event that only blew out people who were not heavily involved with technology. It reinforced several understandings, at least for me. One very interesting pattern that we realize here is that after ChatGPT was released almost immediately, the expectation of maturity of this capability was born. So let’s say one month after ChatGPT was released, there was an expectation that all companies would be able to have their own chatbots based on that technology. And there was some kind of expectation that these chatbots would also be production-grade and highly available and scalable and low latency, etc. So this move from a technology genesis to the commodity or an expectation of commodity was so quick that I don’t think anybody has seen anything similar in the technology industry before. So this was like one of my understandings here and it is really problematic and really hard to cope with that expectation. Because when we need to develop, for example, a chatbot, right? Everybody needs to have a chatbot. So for example, when we are developing our chatbot, there is a wide expectation that this chatbot should be really, really easy to develop and it should be production-grade from day one. But the reality is more complex than that. The technology is not really mature. There are a lot of potential hiccups and potential problems that everybody needs to face, but the expectation is really high in this sense. So that’s like one insight that I have on the GenAI. The second insight here is that the technology evolves in an exponential manner. So there is a lot of movement. ChatGPT was not born, or GPT was not born in one second. It was not born in November 2022. So, uh, There were several versions before GPT-3 was actually released. Version 2 was released the summer before, and I saw demos and it looked impressive. But nothing really happened on the surface. Nobody besides the very, very oriented technology geeks knew about this at all. So there are several ideas to take out from here. One, there is a lot of things going beneath the surface and some sort of, um, packaging or marketing or messaging that eventually makes it through and then everybody is completely blown away. And second, the technology is evolving in an exponential manner. So it seems that nothing is happening and then all of a sudden it seems that everything is happening. And maybe the last insight here is that once OpenAI released ChatGPT, it seemed like all the companies all of a sudden had these models coming up, and everybody has some sort of large language models. Today, there are hundreds, if not thousands, if not tens of thousands, of different standalone large language models, and it’s only two years after GPT was really out of the closet. It reminds me of sports events when there was a notion that a runner, for example, cannot run a marathon below, let’s say, three hours. And then once somebody broke this barrier of speed, everybody was breaking this barrier of speed all of a sudden. So it became available to everybody. So this is something that I see happening with large language models.
David: [00:11:09] So are you saying that large language models are kind of like a performing enhancing drug?
Daniel: [00:11:13] No, it came from a different way. It was not like performance enhancing drug. But there was this invisible glass ceiling that you could not run through until somebody broke that ceiling. And then when OpenAI broke that ceiling, everybody was following. And it seemed like everybody was on the verge of doing that. Right. But I think it’s not true. Very few companies were thinking about creating, building, and training their own large language models until that ceiling was broken.
David: [00:11:45] So, from your seat then, of course, in a cybersecurity company, and you’re our chief product technologist, how much in that moment, in that first wave, are you thinking, “Oh, we got to defend against AI-enabled threats.”
Daniel: [00:11:57] Yeah. AI-enabled threats is a real thing. There are new attack surfaces, new threats around, especially around the social engineering areas, deep fakes of different kinds, voice, video, etc., texts, of course, auto-generated messages by all different transit methods. So all of that became much more available in the hands of a hacker. Though, that the defense part is not necessarily symmetrical to the offensive part.
David: [00:12:23] How so?
Daniel: [00:12:25] By that I mean, it is not necessarily that the AI-generated threat is necessarily handled using AI-generated defense mechanism. So these two areas are living sort of separately, they are feeding one to another, but they’re not necessarily equal, they’re not necessarily parallel. So, we can handle, organizations can handle and overcome and potentially beat AI-generated threats using classical methods, or they can use AI to defend classical attacks. So these are not necessarily symmetrical. So that’s what I mean.
David: [00:13:00] So instead of, it’s not a changing of an approach, it’s more of a transformation of using AI and ML to transform identity security approaches.
Daniel: [00:13:09] Right. Identity security approaches need to be adopted to both the attacker perspective to the new threat landscape that is born due to AI. And again, social engineering was just one part of that, but there are different attack vectors. For example, attacking the LLM models themselves, trying to feed them false information or trying to break into them using different prompt engineering techniques, and there is also an opportunity in leveraging AI-based defense mechanisms. Okay. So in that sense, IAM, and in general identity landscape, should be changing based on that. Should take into account these ongoing technology changes and evolutions.
David: [00:13:59] So that said, what’s been the transformative impact of AI in the domain of identity and access management so far?
Daniel: [00:14:06] I think from my experience or from some of the things that we have seen in the past little bit over half a year of working in the Center of Excellence, there are several vectors of development, several buckets, if you wish, of activities which all help drive this AI capabilities further to the customers, to our customers. Some of the main pillars or some of the main buckets of activities that we can see leveraging AI and ML in the IAM world would probably be around automation, around the clustering activities, be it anomaly detection or be it similarity detection. It would be around automation in general. And I think an additional area where generative AI and large language models brought into the wide audience was around analyzing large pieces of text of different types of text or different types of input. It can be also video input, for example, and summarizing those and drawing some sort of conclusions based on what was written in a very similar manner. How you can take a long article and ask ChatGPT to summarize this in a few bullets. In a similar manner, we can leverage the same capability to summarize users’ activity in different systems. So this is one very interesting area that we have been exploring here at CyberArk. Obviously, natural language in general has been brought to the surface and brought to the availability of a wide audience now. And I’m not only talking about regular chatbots. Where a user can ask a question and get the responses from the documentation, from knowledge base or potentially using an assistant to execute commands on his behalf. But it is also to create higher level of, for example, policy or higher level of communication between a user and the system. Where you would not necessarily be required to have highly technical skill in defining a question or in building a query or in creating a policy all of those can be eventually, and we are right at the first steps of these developments, it can be brought to natural language.
David: [00:16:08] At the CyberArk AI Center of Excellence, which you head up, who has a seat at the table? Obviously, from your side, it’s R&D. Who else has a seat at the table? And what does a typical gathering of the team look like? What are you discussing? Are you discussing what we need to develop? Are you discussing what’s already come? Are you discussing threats? What’s on the agenda?
Daniel: [00:16:29] Yes, everything. Basically, as the Center of Excellence, we need to take care of several things. First of all, we need to be outward-looking and learning the landscape, learning the evolving landscape, what’s happening with other companies, what’s happening with the technology and try and utilize these technologies to bring AI-related capabilities to CyberArk’s product. So this is an sort of external forward-looking work that we’ve been doing since the establishment of the ICOE. But beyond that, we are also required to bring the large R&D and product organization on board. My team is relatively small team. We are nowhere close to the size of the R&D and we are not targeting to compete with R&D in size, scale, stability, and everything else. So in this sense, it’s not enough that my team knows how to do the work. It is super important to make sure that the large CyberArk R&D is capable of doing the work without our assistance and without our escort. So a significant part of our work is to ensure that we manage to hand over knowledge and parts of projects and ideas and goals and technical capabilities to the R&D teams. So whenever we look at a specific initiative, a potential initiative, we also need to take into a significant amount, whether the R&D team would be able to take this on themselves in the near future, so we’re not just creating innovation for the sake of innovation without having a future owner.
David: [00:17:59] So, as we speak now, we’re in the last week of May. And last week was our big Impact 2024 conference in Nashville, where there was an announcement that obviously involved some of the work that your team has done. What is CyberAkora AI and what was your team’s influence/involvement with that?
Daniel: [00:18:19] Last week we were at Nashville. It was an amazing event. As for me, I’ve been at CyberArk for six years. I think it was the best one. Of course, Cora AI announcement was the highlight, at least for me and for my team, because we were so heavily involved in the development of these capabilities. Cora AI is an umbrella trademark for all AI capabilities throughout different product lines and throughout different products at CyberArk. So this is not a specific capability. It is an umbrella that is supposed to contain within it all of the developments that we’ve been doing throughout different products. That combine not only generative AI capabilities such as chatbots and generating summaries and others, but also, let’s call it classical ML capabilities where we actually process data, run ETL pipelines, build models, run inferences, etc. So this is basically the umbrella name for all AI capabilities.
David: [00:19:26] Excellent. So before we get into your predictions, your own personal predictions. So we need to make that clear. These are not the predictions of CyberArk or the CyberArk Artificial Intelligence Center of Excellence. In general, how do you see AI and GenAI, for that matter, reshaping the balance between productivity and security in IAM?
Daniel: [00:19:49] The question of productivity versus security has been a long-standing question. Like, where should we invest? Which capabilities should we do more? Should we concentrate more on the security side to make our customers more secure? Or should we be concentrating more on the productivity side and making our customers more productive because everybody’s got so much stuff on their plates. So if you manage to take the stuff off their plates quicker, it would be much more appreciated. And what I think we have discovered is that most of the initiatives have both sides in them. So this is not a sort of a balance between should we do more productivity or should we do more security. But I think in each and every one of the features or capabilities or initiatives or products, we can put some sort of a weight both on the productivity side and on the security side. Even if you take the most security-related capability such as automatic threat prevention. Let’s take that as an example. So it’s obviously a security feature. But then you can argue and say, if we had not have this, somebody would need to actually monitor the activity and manually do this work by themselves. So by having an automated feature like that, this is not only a security feature, but this is also a productivity feature for that person who doesn’t need to do this manually anymore. So there is an argument to be made. On the majority of features that have both productivity and security in them. I will say that potentially chatbots where you ask questions like documentation, etc., are probably more on the productivity side and they don’t really have security value in them for the most part, unless they make you.
David: [00:21:22] Okay, so now that we’ve established you know a lot about AI, let’s move on to the subject matter and the blog that you recently wrote to set the stage for your predictions. And we’re not going to go into all the predictions here today in the podcast. We’re going to, we’re going to leave some of those for, for folks wanting to check them out in the blog. Which we’ll of course put the name and link to in the, the show summary/show notes, whatever they’re calling them these days. I guess to start things off, to set the stage for the predictions. What are the three main pillars of AI in IAM as you’ve defined them in the blog?
Daniel: [00:21:51] Awesome. This is my favorite topic. The first pillar here is the chatbot, or in a more wider sense, AI assistant. This is where I think a lot of productivity gains will be heading. And this pillar is the chatbot. Going to grow into a sort of an assistant that is not only passive and waiting for your input like ChatGPT or like a classical chatbot. But more into an assistant that would give you ideas or prompts or suggestions. For example, you come into your application, your work dashboard, you would be given some recommendations. Hey, David, today, these are the top three things you should be doing. These are the risks you need to be handling today. These are your next tasks. These are your onboarding things that you should be running now. So today, this capability exists, but it exists sort of an outside of the large language model world. So, of course, there are applications and there are capabilities of creating push notifications to a user. We’ve had those for many, many years. But now the expectation is that whenever this notification comes through, you would be able to have a conversation based on that. Let’s say, okay, I need to be onboarding new accounts. Which accounts? How should I be onboarding them? What’s the next thing to do? Other risks, etc. So, there is an expectation of a more interactive work with this assistant. And of course, this assistant will grow to become much more custom-made to you personally and to your organization as a whole. For example, if your organization has just started its journey with CyberArk products, you would be given a certain set of recommendations and next steps. But if your organization has been with CyberArk for 15 years, I think you would be given a different set of recommendations and next steps to do, right? So it depends. So this is sort of a development vector that I see happening and will continue to be evolving in the upcoming few years. In the area of assistance.
David: [00:23:37] Okay, so that’s the first one.
Daniel: [00:23:39] Right, yeah, that’s the first one. And of course CyberArk has released some of the first capabilities right now during our Impact event. So we have a documentation chatbot and we have an assistant chatbot which would allow you to run some commands using natural language. So you would ask the system to do some chores instead of you.
David: [00:23:59] Okay, so we’ve covered the first pillar. What’s the second pillar?
Daniel: [00:24:02] The second pillar is the access policies. So there are several things happening in the area of access policies, or you can call them authorization policies. First of all, of course, there is this trend of having policy recommendations. We’ve released such capability on the Endpoint Privilege Manager last January, where an administrator would be, whenever he’s creating a policy, he would be given recommendations saying, let’s say 70 percent of customers allowed this and 30 percent declined this. It gives you some sort of a ballpark where you’re standing. You can decide whatever you choose, but at least you have some sort of a guardrail or guidance if you wish. And I think more and more products will have similar capabilities in time.
David: [00:24:41] Mm-Hmm.
Daniel: [00:24:42] But also I think in time you will also start seeing automatic, like full policies. Not only one rule at a time, recommendations, but you would potentially be given a full policy. Say, listen, David, customers like you or customers in your segment or your size or your whatever, have like these policies or similar policies to this. This is your template. Start with this, don’t start from blank, start from this. And then it would hopefully give you on the one hand productivity because you will be starting from a template and then it would also give you some sort of a ballpark understanding what the others are doing. So you would be able to drive this collective knowledge and collective understanding how to define things and how to run through this. So this is like policy recommendations area of development.
David: [00:25:27] Right. So what you’re talking about here, just to be clear. You’re talking about things that make, that give you more information, more vision, more of a sort of a purview rather than a machine taking over and doing it for you.
Daniel: [00:25:41] Right. The machine would be driving help. The machine would be suggesting you things based on machine learning and data aggregation, etc. But essentially you would still be driving this, right? You would still need to decide if that’s good for you. You would need to sign off on this. At the very least, right? Technically, it can be feasible, of course, to apply automatically created policies, but I think at least for this upcoming several years, there is still an expectation, like our human expectations, that we will be asked at least regarding the suggested policies. Maybe there will come a day when you say, it’s good enough, just do whatever the machine decides. But even for the auditing reason, even for the reasons of accountability, who is responsible? So you at least need to say, okay, I agree to that. So this is the minimum human in the loop intervention that I expect to see in the upcoming several years.
David: [00:26:44] Okay.
Daniel: [00:26:45] Another area here within the same access policies pillar is something that I personally like very much. I call it intent as policy.
David: [00:26:53] That’s intent as policy you said?
Daniel: [00:26:55] Intent as policy, right? So how do you create a policy based on intent? In general, like in our head, it starts with an intent. So you say, okay, I want to allow this user to do that. I want to deny that user from doing that. But then you need to translate this into technical jargon, into technical language, into the policy that is defined by that product, by that company. But what I think we will start seeing in time is that because generative AI can understand human language so well, and it can discern your actions into human language, there will come a time when you will be able to simply transfer your intent as a policy. So you would not be seeing the technical jargon-based policy in your rule base, but you would just see the policy that says Daniel is allowed to access his AWS accounts these days at these times, or even something like our secure web session product, which monitors and logs users’ interaction on the web page. Consider a policy that says non-administrative users are not allowed to update credit card fields. Like, that’s the rule. That is the rule. Literally. Plain English. And then GenAI would be able to understand on the back end, what are those credit card fields and what is that non-administrative user? So in this sense, I think we will start seeing more evidence of such human, natural language-based rules. So that’s why maybe a better name for that would be natural language as policy, but I think intent is deeper than that because that’s what you want to convey in a policy anyway. So, this is the idea here. And eventually, we will see more automation in policy creation. I mentioned it earlier, but I think this automation in policy creation will be driven not only by similarity or heuristics, but will also be driven by personalization of behavior, of your personal behavior. And of your organizational language or your organizational set of rules or guardrails, right? So this is a little hard to define in specific words, but I think the main idea here is that policies will become more customized to specific people and to specific circumstances.
David: [00:29:03] Really interesting. And for the listeners out there, we’ve already mentioned it, but of course they can go to your blog on the CyberArk blog, predicting the future of AI and identity and access management for a little bit more of a breakdown of when you think these different things are going to happen and a little bit more depth of exactly the steps for all of that. And it’s really, really interesting stuff to wrap things up here to sort of go back to more general aspects of AI, and I guess to say general aspects of GenAI would be Gen Gen AI. What are you most excited about when it comes to GenAI, both professionally and personally in the next upcoming five to ten years?
Daniel: [00:29:43] I think this is an amazing place to be at. I think this is an amazing time to live in. There are so many things happening around. The technology is evolving so quickly. Exciting and somewhat scary at the same time. I believe that it is important to try and stay afloat in this volatile world of artificial intelligence. I think it’s very important to try and stay up to date and it’s important to not be necessarily afraid of what this technology will bring, but it is necessary to stay up to date and try to bring things on board. As far as they are aligned with your, let’s say, security guidelines and as far as they are aligned with your values. There are a lot of gains to be had by bringing on board GenAI solutions. For example, one of the things we did not mention here, but CyberArk has not been afraid to bring on board, for example, GitHub Copilot for all of our developers and actively driving their training and involvement in the GenAI world. So not only as a company that creates products for our customers, but also internally when we look inside. We are very much excited about capabilities that GenAI brings to the table. And we are doing all we can to try and drive the consumption and use of these tools internally as well.
David: [00:30:56] Mm-Hmm.
Daniel: [00:30:57] So that is what I would recommend. I would not recommend to back off. I would not recommend to wait until things settle. I’m afraid they are not going to settle anytime soon.
David: [00:31:07] In brief reality. Don’t try to shut the door on it, as we know very well, there isn’t just one door.
Daniel: [00:31:12] Right. Embrace reality and look ahead and walk towards the future.
David: [00:31:17] Daniel Schwartzer, CyberArk’s Chief Product Technologist and the leader of CyberArk’s Artificial Intelligence Center of Excellence. Thanks so much for coming on to the podcast. Really nice to speak with you.
Daniel: [00:31:25] Thank you so much, David.
David: [00:31:27] Thanks for listening to Trust Issues. If you liked this episode, please check out our back catalog for more conversations with cyber defenders and protectors and don’t miss new episodes. Make sure you’re following us wherever you get your podcasts and let’s see. Oh yeah. Drop us a line. If you feel so inclined questions, comments, suggestions, which come to think of it are kind of like comments. Our email address is trustissues, all one word at cyberark.com. See you next time.