NIST has released a concept paper proposing new control overlays to secure AI systems, built on the SP 800-53 framework. Learn what the new framework covers and why experts are calling for more detailed descriptions.
In a significant step toward managing the security risks of artificial intelligence (AI), the National Institute of Standards and Technology (NIST) has released a new concept paper that proposes a framework of control overlays for securing AI systems.
The framework is built on the well-known NIST Special Publication (SP) 800-53, which many organizations already use to manage cybersecurity risk; the overlays are essentially a set of cybersecurity guidelines that help organizations tailor those controls to AI.
The concept paper (PDF) lays out several scenarios for how these guidelines could be used to protect different types of AI. The paper defines a control overlay as a way to customize security controls for a specific technology, making the guidelines flexible enough to cover different AI applications. It also includes security controls aimed specifically at AI developers, drawing from existing standards such as NIST SP 800-53.
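To make the overlay idea more concrete, here is a minimal sketch of how such a customization could be represented in code. It is purely illustrative and not taken from the NIST paper: the control IDs (AC-2, RA-3) come from SP 800-53, but the overlay structure, field names, and AI-specific guidance text are hypothetical assumptions.

```python
# Hypothetical sketch of a control overlay: a small set of SP 800-53 controls
# tailored with AI-specific guidance for a generative AI use case.
# The structure and field names are illustrative assumptions, not NIST's format.

from dataclasses import dataclass, field


@dataclass
class ControlTailoring:
    control_id: str                 # SP 800-53 control identifier, e.g. "AC-2"
    base_title: str                 # title of the underlying control
    ai_supplemental_guidance: str   # AI-specific adjustment added by the overlay


@dataclass
class ControlOverlay:
    name: str
    use_case: str                   # e.g. "generative AI", "predictive AI", "agentic AI"
    tailorings: list[ControlTailoring] = field(default_factory=list)


# Example: tailoring two familiar controls for a generative AI system.
genai_overlay = ControlOverlay(
    name="Example Generative AI Overlay",
    use_case="generative AI",
    tailorings=[
        ControlTailoring(
            control_id="AC-2",
            base_title="Account Management",
            ai_supplemental_guidance="Restrict which accounts may submit prompts "
                                     "to or fine-tune the model.",
        ),
        ControlTailoring(
            control_id="RA-3",
            base_title="Risk Assessment",
            ai_supplemental_guidance="Assess prompt-injection and data-leakage "
                                     "risks specific to the model's deployment.",
        ),
    ],
)

# Print the tailored controls in the overlay.
for t in genai_overlay.tailorings:
    print(f"{t.control_id} ({t.base_title}): {t.ai_supplemental_guidance}")
```

The point of the sketch is simply that an overlay does not replace SP 800-53; it layers use-case-specific guidance on top of controls organizations already know.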
In the paper, NIST identifies use cases for organizations adopting AI, such as generative AI, predictive AI, and agentic AI systems.
While the move is seen as a positive start, it is not without its critics. Melissa Ruzzi, Director of AI at AppOmni, shared her thoughts on the paper with Hackread.com, suggesting that the guidelines need to be more specific to be truly useful. Ruzzi believes the use cases are a good starting point but lack detailed descriptions.
“The use cases seem to capture the most popular AI implementations,” she said, “but they need to be more explicitly described and defined…” She points out that different types of AI, such as those that are “supervised” versus “unsupervised,” have different needs.
She also emphasizes the importance of data sensitivity. According to Ruzzi, the guidelines should include more specific controls and monitoring based on the type of data being used, such as personal or medical information. This matters because the paper's stated goal is to protect the confidentiality, integrity, and availability of information for each use case.
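As a rough illustration of that point, the snippet below sketches how control selection might vary with data sensitivity. The sensitivity categories and the mapping to SP 800-53 control IDs are hypothetical assumptions for illustration, not recommendations from the paper or from Ruzzi.

```python
# Hypothetical mapping from data sensitivity to additional monitoring controls.
# The categories and SP 800-53 control IDs chosen here are illustrative only.

SENSITIVITY_CONTROLS = {
    "public": ["AU-2"],                            # basic event logging
    "personal": ["AU-2", "AU-6", "SI-4"],          # add audit review and system monitoring
    "medical": ["AU-2", "AU-6", "SI-4", "SC-28"],  # add protection of data at rest
}


def controls_for(data_types: list[str]) -> set[str]:
    """Union of monitoring controls implied by the data an AI system touches."""
    required: set[str] = set()
    for dt in data_types:
        required.update(SENSITIVITY_CONTROLS.get(dt, []))
    return required


# An AI system trained on both personal and medical records would need the
# stricter, combined set of controls.
print(sorted(controls_for(["personal", "medical"])))
```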
Ruzzi's comments highlight a key challenge in creating a one-size-fits-all security framework for a technology that is evolving so quickly. The NIST paper is an initial step, and the agency is now asking for public feedback to help shape its final version.
It has even launched a Slack channel where experts and community members can join the conversation and contribute to the development of these new security guidelines. This collaborative approach shows that NIST is serious about creating a framework that is both comprehensive and practical for the real world.