Tech gurus say AI, ML, and LLMs are revolutionary, the most significant disruptors of our time. But in another sense, AI is just the next tool. You need to keep pace with its development to defend your organization from threat actors who will use every tool against you.
I’ve talked to three customers within the last month who told me their teams want to get away from open, public AI platforms. The security risk is too high, they’ve said. We can’t encourage employees to plug our proprietary information into a data farm open to curious developers.
And I agree, we shouldn’t. But it’s just as dangerous if your employees are sharing documents through unsecured Google Docs or Dropbox accounts because current corporate infrastructure is clunky and outdated. Posting a confidential compliance plan to Reddit or 4chan and requesting feedback isn’t better than copy-pasting it into ChatGPT for a fill-in-the-gaps exercise.
Security vs. Security Use Cases
I’ve noticed many thought leaders don’t distinguish between a general call for the security of AI and the specific security use cases for AI within your enterprise security environment. The first can’t be enforced without compliance rules to back up your company policy, and it requires treating AI as an “ecosystem” rather than a single LLM. The second is a far more practical insight, one I’d argue is vital for the future of cybersecurity. We are writing another post on how you can measure your own maturity in both areas.
LLMs are the most capable emerging technology available to blue team defenders. They can serve as a built-in threat intel aggregator, a secure way to check code for malicious errors, and a support for each cell of a modern SOC.
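As a sketch of the code-review use case: a SOC can wrap an LLM behind a tightly scoped prompt so analysts get a constrained first pass rather than an open-ended chat. The function name, prompt structure, and concern list below are illustrative assumptions, not any specific product’s API:

```python
# Hypothetical sketch: wrapping an LLM as a constrained first-pass code reviewer.
# build_review_prompt and its output format are assumptions for illustration.

def build_review_prompt(code: str, concerns: list[str]) -> str:
    """Assemble a scoped prompt asking an LLM to flag only named risk patterns."""
    bullet_list = "\n".join(f"- {c}" for c in concerns)
    return (
        "You are a security reviewer. Flag ONLY the following concerns:\n"
        f"{bullet_list}\n\n"
        "Code under review:\n"
        "```\n" + code + "\n```\n"
        "Respond with a JSON list of findings, or [] if none apply."
    )

# The assembled prompt would then be sent to whichever model the SOC has approved.
prompt = build_review_prompt(
    "import os; os.system(user_input)",
    ["command injection", "hardcoded credentials"],
)
```

Keeping the prompt assembly in reviewable code, rather than ad hoc chat, is what makes the use case auditable.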
Shadow IT and LLMs
I’d define shadow IT as technology used by your employees that’s not endorsed by corporate IT. By necessity it must be minimized to retain overall data security. But shadow IT isn’t going to disappear because a company policy outlaws PowerPoint image generation on Midjourney. If your organization doesn’t already have foundational data security practices for shadow IT discovery, we’d urge you to pursue them.
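Foundational discovery can start simply. A minimal sketch, assuming you have proxy or egress logs to query; the log format and the domain list here are assumptions, not a complete inventory of generative-AI services:

```python
# Hypothetical shadow-IT discovery sketch: flag proxy-log entries whose
# destination matches a known generative-AI domain. The domain set and the
# log schema ("user", "host" keys) are illustrative assumptions.

GENAI_DOMAINS = {"chat.openai.com", "midjourney.com", "gemini.google.com"}

def flag_genai_traffic(proxy_log: list[dict]) -> list[dict]:
    """Return log entries whose destination host matches a known GenAI domain."""
    return [
        entry for entry in proxy_log
        if any(entry["host"].endswith(d) for d in GENAI_DOMAINS)
    ]

log = [
    {"user": "alice", "host": "chat.openai.com"},
    {"user": "bob", "host": "intranet.example.com"},
]
hits = flag_genai_traffic(log)  # flags alice's entry only
```

A real deployment would feed these hits into an awareness conversation, not a punishment pipeline; discovery is the prerequisite for sanctioned alternatives.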
In our increasingly IoT-driven environment, it’s crucial to your business success that you close off the avenues a threat actor could use to co-opt your employees’ shadow IT and compromise your security practices.
Democratization of Technology
In the U.S. we’re passionate about protecting the freedom our system of government provides as a democratic republic. The first of those terms, democratic, means accessible to all. AI and LLMs have the capacity to drive massive personalization in our search engines, social media, and buying decisions. For employees, these tools can be a path toward productivity and time efficiency through rapid information synthesis, not just the replication of repetitive tasks.
We believe human intelligence is integral to any application of machine learning—generative or otherwise. Complex algorithms, even ones trained to process information like a human, cannot compete with our innate ability to think critically.
So is Ascent for or against AI, ML, and LLMs?
We’re proponents of technology innovation. Our firm is built on what we believe is the future of cyber: technology-enabled services. We project LLMs will be a part of the daily, hourly activities of a cybersecurity analyst just 12 to 24 months from now (and we are actively building the integrations to make that possible). We’ve written several blogs outlining practical applications for LLMs within SecOps, and we’re writing more.
But for those who haven’t explored AI, who are still asking if this is a wise path to take or whether regulation is necessary, I’d encourage you to secure the use cases in which your employees will be leveraging the ChatGPTs and the DALL-Es of the internet. Don’t let shadow IT win the day.
If you would like to pursue data security and leverage AI and ML as an organization, please reach out to us at firstname.lastname@example.org.