Generative artificial intelligence (AI) has officially arrived in the enterprise and is poised to disrupt everything from customer-facing applications and services to back-end data and infrastructure to workforce engagement and empowerment. Cyberattackers also stand to benefit: 93% of security decision-makers expect AI-enabled threats to affect their organization in 2023, with AI-powered malware cited as the No. 1 concern.
Facing this reality, IT and security leaders must strategically capture new business value while mitigating the risk brought by AI-enabled tools. A security-first mindset and the ability to adapt are critical because, as years in the military taught me, the only way to the end is through.
Five Steps for Securely Embracing the Enterprise AI Opportunity
Part of my job as CyberArk Global Chief Information Officer is using new data-driven tools and cloud technologies to support resiliency, speed and scale. It’s an exciting time to be a technologist, and I see many potential ways to harness AI technology to advance CyberArk’s mission and growth. In some areas, we’re already doing so. Personally, I believe AI tools can help me be a more productive, focused and impactful leader.
Yet navigating AI’s Wild West is challenging. With no playbook or precedent to follow, information sharing is crucial. In that spirit – and based on our team’s experiences, ongoing peer conversations and market insights – here are five practices for IT and security leaders to consider as enterprise AI use cases expand:
1. Define your organization’s AI position from the start. Is your company already using generative AI at enterprise scale? Perhaps it is just beginning a proof of concept (PoC) to test the waters. Or maybe it has drawn a hard line by blocking ChatGPT and similar tools until regulators can catch up and guardrails are codified. Whatever your organization’s AI position may be, it must be clearly defined and communicated from the top so everyone starts – and stays – on the same page.
2. Open the lines of communication. Establishing AI-specific company guidelines, publishing usage policies and updating employee cybersecurity training curricula are necessary steps. But real dialogue goes both ways.
At CyberArk, employees are encouraged to send their AI-related questions, ideas and requests to a dedicated email address. An AI “tiger team” of cross-functional experts meets bi-weekly to review and respond to each submission, identify high-value use cases and work to create secure, policy-aligned models for using AI tools that teams can adopt. As we move forward, this team will play an integral role in tackling emerging challenges and devising creative strategies that help us maximize AI benefits.
3. Revisit the internal software request process. According to the 2023 CyberArk Identity Security Threat Landscape Report, employees in 62% of organizations use unapproved AI-enabled tools that can increase security risk. This highlights the need for IT and security leaders to adapt their approaches or risk being viewed as innovation blockers.
Like most IT departments, my team is experiencing a surge in workforce requests for AI-enabled tools and add-ons. In response, we’ve enhanced our third-party software vetting system to help meet employees’ needs more efficiently while doing our due diligence. But this process doesn’t stop at employee requests. Why? Because “assume breach” also means we must “assume shadow IT” (software downloaded and used without IT’s approval) and “assume clicks” (especially as AI-fueled phishing campaigns become increasingly convincing). We proactively layer endpoint security controls with malware-agnostic defenses to enforce Zero Trust and least privilege, and to close gaps caused by inevitable human error.
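There is no single way to operationalize “assume shadow IT,” but one low-cost starting point is mining the egress logs you already collect for traffic to well-known generative AI services that haven’t been vetted. The sketch below is a hypothetical illustration – the log format, file name and domain lists are assumptions, not a description of our actual tooling – and its output is meant to feed the vetting process described above, not to single out users.

```python
# Illustrative sketch: surface potential "shadow AI" usage from an egress log.
# Assumed (not from the article): a CSV export with columns
# timestamp,user,domain, plus the example domain lists below.
import csv
from collections import Counter

APPROVED_AI_DOMAINS = {"api.openai.com"}  # tools already vetted by IT (example)
KNOWN_AI_DOMAINS = {                      # common generative AI services (example)
    "api.openai.com", "chat.openai.com", "bard.google.com",
    "claude.ai", "api.anthropic.com",
}

def find_shadow_ai(log_path: str) -> Counter:
    """Count hits to known AI domains that are not on the approved list."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower()
            if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
                hits[(row["user"], domain)] += 1
    return hits

if __name__ == "__main__":
    for (user, domain), count in find_shadow_ai("proxy_log.csv").most_common():
        print(f"{user} -> {domain}: {count} request(s)")
```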
4. Speak the CFO’s language. As technology leaders, we must build operationally efficient platforms and environments – even more so in the current economic climate. As changemakers, we must demonstrate AI’s business value to our CFOs. An honest, rational approach backed by hard data is critical; illustrating how a tool can help advance multiple business priorities is even more powerful.
Here’s an example “pitch” that uses recent stats on the AI-powered developer tool GitHub Copilot: “Early data shows that this tool can help our developers code – and innovate – up to 55% faster. But increased speed is just the start – it can also help us engage our employees more effectively: 75% of developers say the tool helps them feel more fulfilled and able to focus on more satisfying work. This is important since studies have linked higher job satisfaction to stronger employee retention, customer loyalty and company financial performance.”
5. Continuously assess AI threats. This means rigorously assessing every AI-enabled tool before use, continuously assessing every AI-enabled tool in use and being able to immediately block and roll back any AI-enabled tool if necessary (a simple sketch of that capability follows at the end of this section). It also means constantly thinking like an attacker and concentrating on identities – attackers’ greatest opportunity area.
Security researchers have already uncovered numerous ways that threat actors could use AI to improve techniques in the early phases of identity-based attacks. For instance, AI can help them write legitimate-sounding email copy for phishing campaigns, generate malware that evades detection or bypass facial recognition authentication. Just recently, CyberArk Labs used a short clip of my voice – pulled from the only English-language podcast I’ve ever recorded – to create an AI-generated deepfake that could be used for voice phishing. It’s yet another way attackers are innovating to circumvent traditional security controls. If my colleagues could do this in less than five minutes, imagine how easy it would be for an attacker to impersonate a high-profile executive or government leader who’s frequently on TV.
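On the “block and roll back” requirement above: one lightweight way to make that capability concrete is a central registry that tracks each AI-enabled tool’s approval state and remembers prior states so any change can be reversed immediately. The sketch below is a hypothetical illustration under assumed states and names, not a description of CyberArk’s internal tooling.

```python
# Illustrative sketch: a minimal registry for approving, blocking and
# rolling back AI-enabled tools. States, names and API are hypothetical.
from enum import Enum

class ToolState(Enum):
    PENDING = "pending"    # under security review
    APPROVED = "approved"  # cleared for workforce use
    BLOCKED = "blocked"    # access revoked, e.g., after a new threat finding

class AIToolRegistry:
    def __init__(self) -> None:
        self._state: dict[str, ToolState] = {}
        self._history: dict[str, list[ToolState]] = {}

    def state(self, tool: str) -> ToolState:
        return self._state.get(tool, ToolState.PENDING)

    def set_state(self, tool: str, new_state: ToolState) -> None:
        """Record the previous state before changing it, enabling rollback."""
        self._history.setdefault(tool, []).append(self.state(tool))
        self._state[tool] = new_state

    def block(self, tool: str) -> None:
        self.set_state(tool, ToolState.BLOCKED)

    def roll_back(self, tool: str) -> None:
        """Restore the most recent prior state, if any."""
        if self._history.get(tool):
            self._state[tool] = self._history[tool].pop()

registry = AIToolRegistry()
registry.set_state("code-assistant-x", ToolState.APPROVED)  # hypothetical tool
registry.block("code-assistant-x")      # immediate block on a new finding
registry.roll_back("code-assistant-x")  # restore approval once cleared
print(registry.state("code-assistant-x").value)  # -> approved
```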
Amplifying Security Approaches with AI
IT and security teams are also using AI to adapt and improve cyber resilience. While human talent remains critical for combating emerging threats, AI can help bridge some of the gaps caused by the 3.4-million-person cybersecurity worker shortage. CyberArk’s latest research found that 41% of cybersecurity teams use AI to address skills and resource shortages, and 47% use AI for automation today.
Generative AI has the potential to transform many security functions as it continues to improve. Take the security operations center (SOC), for example. Automating time-intensive tasks such as triaging level-one threats or updating security policies frees hard-working SecOps professionals to focus on more satisfying work. Ultimately, this may help reduce staffing shortages while curbing employee turnover and attrition – the second-largest contributor to the cyber skills shortage, according to the latest (ISC)2 Cybersecurity Workforce Study.
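To make the triage idea concrete, here is a deliberately simple, hypothetical sketch: rules auto-close one known-benign pattern, escalate anything touching critical assets and queue the rest as routine tickets. Real SOC automation – SOAR playbooks, ML-assisted scoring, generative AI summarization – is far richer; the alert fields and thresholds below are illustrative assumptions only.

```python
# Illustrative sketch: rule-based triage of level-one alerts.
# Alert fields, tiers and thresholds are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str      # e.g., "edr", "email-gateway"
    category: str    # e.g., "phishing", "failed-login"
    count: int       # occurrences in the aggregation window
    asset_tier: int  # 1 = crown jewels ... 3 = low value

def triage(alert: Alert) -> str:
    """Return 'auto-close', 'ticket' or 'escalate' for a level-one alert."""
    # Known-benign pattern: a handful of failed logins on low-value assets.
    if alert.category == "failed-login" and alert.count < 5 and alert.asset_tier == 3:
        return "auto-close"
    # Anything touching tier-1 assets goes straight to a human analyst.
    if alert.asset_tier == 1:
        return "escalate"
    # Everything else is queued as a routine ticket for later review.
    return "ticket"

print(triage(Alert("edr", "failed-login", 3, 3)))        # -> auto-close
print(triage(Alert("email-gateway", "phishing", 1, 1)))  # -> escalate
```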
Technology is constantly changing. Right now, we’re experiencing another wave in a continuous evolution cycle. There will be challenges ahead, but leadership is all about making decisions in the face of uncertainty. With an open mind and unwavering security focus, technology leaders can confidently navigate these uncharted waters and embrace new opportunities.
Omer Grossman is the global chief information officer at CyberArk. You can check out more content from Grossman on CyberArk’s Security Matters | CIO Connections page.