The recently released Executive Order on the “Safe, Secure, and Trustworthy” use of AI made waves in the tech community. At the highest level, it is a meta plan, or a “plan about making plans,” but let’s distill it a bit and consider how it may affect security organizations.
Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy
Like many things under our cyber-umbrella, standards, frameworks, and best practices will emerge around testing and securing AI systems. In the security space, many of us are well versed in NIST lexicon, control frameworks, and compliance. Organizations employing or developing AI systems can expect more of the same. As many (if not most) organizations will eventually employ some sort of AI capability, security teams and companies will need to be knowledgeable about the requirements for safely and securely using these systems, and about how those frameworks and controls can be applied within their own organization’s risk appetite.
Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content
This statement appears to be targeted toward the creators of AI systems and focused on government communications. But just like the individual-focused attacks we all face today, we as security practitioners will need to keep a close eye on our own people. AI-generated attacks on humans and specific individuals will increase. Defenses will need to improve, and the ability to recognize these attacks will become incredibly important. As the standards progress, we will need to implement them and ensure we meet (at a minimum) the (yet to be defined!) baseline. For now, we focus on security fundamentals.
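One likely building block of any content-authentication standard is a cryptographic check that official content has not been altered. The sketch below is purely illustrative, not a prescribed standard: it uses a shared-key HMAC from Python's standard library, whereas real provenance schemes (e.g., C2PA-style signing) use asymmetric signatures and certificate chains. All names here are hypothetical.

```python
import hmac
import hashlib

# Hypothetical shared secret for illustration only; a real deployment
# would use asymmetric keys so consumers never hold signing material.
SECRET_KEY = b"example-secret-key"

def sign_content(content: bytes) -> str:
    """Produce a hex HMAC-SHA256 tag for a piece of official content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    """Constant-time check that the tag matches the content."""
    expected = sign_content(content)
    return hmac.compare_digest(expected, tag)

message = b"Official announcement text"
tag = sign_content(message)
print(verify_content(message, tag))           # True: content is authentic
print(verify_content(b"Tampered text", tag))  # False: content was altered
```

The point of the sketch is the shape of the workflow, publish a tag alongside the content and verify before trusting, rather than the specific primitive; whichever standard emerges will define the actual mechanics.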
Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software
This is perhaps the most interesting directive, and the most applicable to security teams. We will likely see (and already are seeing) an influx of new tools, platforms, and companies that promise to “solve all of our security problems using AI.” Of course, there will be amazing advances, great tools, and better options for security, but there will also be significant amounts of noise and vaporware.
The base problem remains the same: understanding your security needs and applying the correct mix of people, process, and technology (some with an AI flavor) to bring your organization to an appropriate risk level. It will be interesting and important to keep a close eye on emerging technologies without chasing every shiny object.
Order the development of a National Security Memorandum that directs further actions on AI and security
And finally, in Marine Corps terms, “stand by to stand by,” or “there is more coming.” This is an extremely new space, and as new technologies are released, new threats emerge, and companies evolve their uses of technology, we need to be ready to rapidly adapt to the changing landscape.
AI and Modern Security Operations
Understanding the implications for the organization’s data landscape, the potential cybersecurity risks, and the opportunities for process optimization is crucial to successfully deploying AI. Ascent’s AI Readiness offering equips attendees with the considerations necessary to safely integrate AI into their business. Reach out to firstname.lastname@example.org for more information.