Towards a Systematic Approach to AI Supply Chain Security
- An AI supply chain introduces novel security complexities and vulnerabilities into a system.
- Securing the AI supply chain is foundational to developing and maintaining organisational and societal resilience.
- A shared AI supply chain taxonomy can help organisations identify risks and translate insights from isolated events into wider resilience planning.
Novel AI Challenges Reshape the Security Landscape
Advanced AI adoption is making AI system resilience a core operational dependency within organisations. However, the AI supply chain presents a complex landscape of opaque and interdependent components that introduce new security vulnerabilities for organisations, sectors and societies.
The AI supply chain includes both tangible assets, such as hardware and compute infrastructure, and dynamic entities, such as data flows and deployment environments. Cyber vulnerabilities unique to AI systems, such as data poisoning, model obfuscation and indirect prompt injection, can also propagate across systems and have broader ripple effects that further complicate the security landscape.
This has increased demand amongst security leaders for common approaches to advancing AI supply chain resilience.
Enhancing Resilience through Common Language
In response to this challenge, a new report from the Laboratory for AI Security Research (LASR) by the University of Oxford and Plexal presents a working taxonomy of the AI supply chain. Designed to foster a shared understanding of what AI systems are made of, who builds them and how they can be made more resilient, this taxonomy seeks to provide a transparent and common language for the AI supply chain across sectors.
While no two AI deployments are the same, this shared language is essential to enabling precise discussion of risks, dependencies and security challenges across ecosystems. Building on shared principles from similar recent studies, the taxonomy is deliberately intended to support cross-sector understanding and address blind spots by shifting from component-level analysis to systemic analysis.
As demonstrated in a series of case studies included in the report, this holistic approach is intended to help organisations assess risk continuously and consistently whilst implementing safeguards at the system level of their AI supply chain.
Next Steps for AI Supply Chain Security
A working taxonomy of the AI supply chain provides the foundation for a more strategic and systematic approach to AI supply chain security, but more work is needed. As AI becomes embedded across critical national infrastructure and commercial systems, future efforts must focus on adapting existing supply chain risk governance and cyber security frameworks to the AI context.
Progress will depend on improving transparency, clarifying accountability across stakeholders, and creating incentives for secure practices throughout the supply chain. Investment in education and awareness is also essential to build shared understanding and capability as AI systems become more advanced and complex. Ultimately, advancing secure AI adoption will require coordinated, cross-sector collaboration and the development of practical, scalable and adaptable approaches to managing evolving risks.