In previous blogs, we discussed the significant concerns raised by an organization’s use of artificial intelligence (AI) systems.
Now, in what is being called a landmark collaboration, a number of international partners including the U.S. Cybersecurity and Infrastructure Security Agency (CISA), the UK National Cyber Security Centre (NCSC), and the Canadian Centre for Cyber Security have released Guidelines for Secure AI System Development.
These Guidelines are intended for all providers and stakeholders who use AI systems, whether those systems are built from scratch or on top of tools and services provided by others. They aim to help organizations build AI systems that function as intended, are available when needed, and do not reveal sensitive data to unauthorized parties.
The Guidelines recognize the potential benefits that AI systems offer to society, but emphasize that AI must be developed, deployed and operated in a secure and responsible way. This is particularly important given the novel security vulnerabilities that AI systems introduce alongside standard cybersecurity threats.
The Guidelines are broken down into four key areas within the AI system development life cycle:
- Secure design – This section contains guidelines that apply to the design stage of the AI system development life cycle. It covers understanding risks and threat modelling, as well as specific topics and trade-offs to consider on system and model design.
- Secure development – This section contains guidelines that apply to the development stage of the AI system development life cycle, including supply chain security, documentation, and asset and technical debt management.
- Secure deployment – This section contains guidelines that apply to the deployment stage of the AI system development life cycle, including protecting infrastructure and models from compromise, threat or loss, developing incident management processes and responsible release.
- Secure operation and maintenance – This section contains guidelines that apply to the secure operation and maintenance stage of the AI system development life cycle. It provides guidelines on actions particularly relevant once a system has been deployed, including logging and monitoring, update management and information sharing.
Please see the Guidelines for more information.
Any organization involved in developing or using AI systems should take appropriate steps to mitigate the risks, including adopting an AI policy and following these new Guidelines as appropriate for its operations.
Our Privacy, Data Protection & Cybersecurity group has wide-ranging experience helping a variety of organizations in this area and we can help you manage the risks in AI. Contact us to learn more.
Note: This article is of a general nature only and is not exhaustive of all possible legal rights or remedies. In addition, laws may change over time and should be interpreted only in the context of particular circumstances. These materials are not intended to be relied upon or taken as legal advice or opinion. Readers should consult a legal professional for specific advice in any particular situation.