Artificial Intelligence (AI) is rapidly transforming our world, with applications impacting everything from healthcare and finance to transportation and entertainment. This rapid growth also creates a need for robust security frameworks to mitigate the potential risks associated with AI systems. Recognizing this need, Google introduced the Secure AI Framework (SAIF) last year, laying the groundwork for secure AI development. Building upon this foundation, the company launched a new industry-wide initiative called the Coalition for Secure AI (CoSAI).
CoSAI
CoSAI brings together leading technology companies like Amazon, IBM, Microsoft, and OpenAI, alongside other prominent organizations. Hosted by OASIS Open, a recognized standards body, CoSAI aims to develop comprehensive security measures to address the unique challenges posed by AI. The coalition’s focus extends beyond immediate threats, aiming to establish best practices that can adapt and evolve alongside future AI advancements. This collaborative effort signifies a critical step towards ensuring the safe and responsible development and deployment of AI technologies.
Addressing Core Security Concerns
CoSAI has identified three initial workstreams to tackle in collaboration with industry experts and academic institutions:
- Software Supply Chain Security for AI Systems: This workstream focuses on securing the software development lifecycle for AI models. By leveraging established supply chain security frameworks such as SLSA (Supply-chain Levels for Software Artifacts) provenance, CoSAI aims to ensure a secure and traceable development process. This includes tracking the origin of AI models and the modifications made to them throughout the software supply chain, allowing potential vulnerabilities to be identified.
- Preparing Defenders for a Changing Cybersecurity Landscape: Security professionals face a unique challenge when dealing with AI-specific threats. CoSAI’s efforts in this area involve developing a dedicated security framework. This framework will equip defenders with the knowledge and tools to identify and address evolving AI security risks. Furthermore, the framework will be adaptable, ensuring mitigation strategies remain effective even as offensive AI cybersecurity techniques advance.
- AI Security Governance: Establishing best practices for governing AI security is another crucial aspect of CoSAI’s work. This workstream involves developing resources to guide practitioners in managing, monitoring, and reporting on the security of their AI products. These resources will include a taxonomy of potential risks and controls, checklists for comprehensive security assessments, and scorecards to measure and track security posture.
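To make the supply chain workstream concrete, the sketch below shows what SLSA-style provenance for a model artifact might look like in practice: hashing the artifact and wrapping the digest in an in-toto-style statement so a verifier can later confirm the artifact is unmodified. This is an illustrative sketch only, not an official SLSA or CoSAI tool; the builder ID and source URI are hypothetical placeholders.

```python
import hashlib
import json


def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()


def make_provenance(artifact_name: str, artifact_bytes: bytes,
                    builder_id: str, source_uri: str) -> dict:
    """Build a minimal SLSA-style provenance statement for one artifact.

    Field names follow the in-toto Statement / SLSA provenance layout,
    but this is a simplified illustration, not a conformant implementation.
    """
    return {
        "_type": "https://in-toto.io/Statement/v1",
        "subject": [{
            "name": artifact_name,
            # The digest ties the statement to exactly one artifact version.
            "digest": {"sha256": sha256_digest(artifact_bytes)},
        }],
        "predicateType": "https://slsa.dev/provenance/v1",
        "predicate": {
            "buildDefinition": {
                "externalParameters": {"source": source_uri},
            },
            "runDetails": {
                "builder": {"id": builder_id},
            },
        },
    }


# Example: attach provenance to a (toy) serialized model.
model_bytes = b"model-weights-v1"
statement = make_provenance(
    artifact_name="sentiment-model.bin",
    artifact_bytes=model_bytes,
    builder_id="https://example.com/ci/builder",      # hypothetical builder
    source_uri="git+https://example.com/org/model-repo",  # hypothetical repo
)
print(json.dumps(statement, indent=2))

# A verifier re-hashes the artifact it received and compares digests;
# a mismatch means the model was modified after the statement was signed.
assert statement["subject"][0]["digest"]["sha256"] == sha256_digest(model_bytes)
```

In a real pipeline the statement would be cryptographically signed (for example as a DSSE envelope) so consumers can trust not just the digest but the identity of the builder that produced it.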
CoSAI Promotes Partnering for Responsible AI
The coalition recognizes the importance of collaboration in addressing the complex challenges of AI security. It plans to work alongside other organizations, such as the Partnership on AI and the Open Source Security Foundation, to promote responsible AI development. This collaborative approach ensures a broader perspective and fosters the development of comprehensive solutions. CoSAI is committed to evolving its efforts alongside the continuous development of AI technologies. This commitment necessitates staying ahead of emerging threats and adapting risk management strategies accordingly. By fostering collaboration and encouraging industry participation, CoSAI aims to create a secure and trustworthy future for AI.
Supporting CoSAI’s Mission
The success of CoSAI hinges on active participation from the broader AI community. Individuals and organizations can contribute to this critical initiative by visiting the coalition’s website (coalitionforsecureai.org) to learn more about its work and explore opportunities for involvement. Additionally, Google’s Secure AI Framework page serves as a valuable resource for those interested in delving deeper into AI security best practices.