AI Supply Chain Security: From Visibility to Action

Developed by Plexal and supported by Cisco, this research investigates the evolving security challenges posed by AI supply chains and the limitations of applying traditional software assurance models to AI‑enabled systems.

April 22, 2026

Key Findings

• AI supply chains introduce security risks that traditional software assurance models do not address, due to data‑defined behaviour, opaque model internals, and dynamic deployment environments.

• Limited visibility across data, models, infrastructure, and vendors creates systemic risk, making it difficult to detect vulnerabilities, trace incidents, or coordinate effective remediation.

• Fragmented ownership and weak attribution mechanisms undermine accountability, particularly when AI systems rely on multiple upstream providers across jurisdictions.

• Static transparency artefacts (such as model cards and SBOMs) are insufficient on their own, as they don't reflect the dynamic nature of the AI supply chain or support continuous validation or operational decision‑making.

• Translating visibility into actionable controls is critical, enabling organisations to respond decisively when dependencies fail, change, or become compromised (a minimal sketch of this idea follows this list).
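
To make the contrast between static artefacts and actionable controls concrete, here is a minimal sketch, not drawn from the research itself, of a machine‑readable manifest combining model‑card and SBOM‑style fields with a check that fails closed when deployed artefacts drift from what the manifest declares. The schema, field names, and paths are illustrative assumptions, not a published standard.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a file so it can be compared against the value pinned in the manifest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical manifest mixing model-card and SBOM-style fields.
manifest = {
    "model": "sentiment-classifier",
    "version": "2.3.1",
    "upstream_providers": ["base-model-vendor", "dataset-vendor"],
    "weights": {"path": "weights.bin", "sha256": None},  # pinned at release time
}

# Simulate a release: write the weights file and pin its hash.
with open("weights.bin", "wb") as f:
    f.write(b"trained model weights")
manifest["weights"]["sha256"] = sha256_of("weights.bin")

def validate(m: dict) -> bool:
    """Re-run on every deployment: True only if weights still match the manifest.

    A static model card records this hash once; re-checking it continuously
    is what turns the artefact into an operational control.
    """
    return sha256_of(m["weights"]["path"]) == m["weights"]["sha256"]

print(validate(manifest))  # True: deployed artefact matches the manifest

# Simulate an upstream swap of the weights file.
with open("weights.bin", "wb") as f:
    f.write(b"tampered weights")
print(validate(manifest))  # False: fail closed and block deployment
```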

The research's primary goal is to identify why gaps in visibility across data, models, infrastructure, and organisational boundaries prevent organisations from effectively managing risk, responding to incidents, and maintaining operational control as AI systems scale.

The research adopts a qualitative, systems‑level approach, drawing on contemporary policy context, industry practices, and real‑world case studies to analyse AI supply chain vulnerabilities. It examines AI‑specific risk dimensions, including data redefining the system, expanded attack surfaces, structural fragmentation across vendors, and weak attribution mechanisms, and shows how these factors combine to create new forms of systemic risk. Case studies, such as large‑scale supply chain compromises and data poisoning techniques, illustrate how upstream weaknesses can propagate across interconnected AI ecosystems.
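
As a deliberately simplified illustration of the kind of upstream monitoring the data poisoning case studies motivate, the sketch below flags label‑frequency shifts between a trusted dataset snapshot and an incoming refresh. The function, data, and threshold are hypothetical, and targeted poisoning can evade a screen this crude, but it shows how a static trust assumption about upstream data can be replaced by a repeatable check.

```python
from collections import Counter

def label_shift(baseline: list[str], incoming: list[str], tol: float = 0.05) -> dict:
    """Flag labels whose relative frequency moved more than `tol` between
    a trusted baseline snapshot and an incoming dataset refresh."""
    def freq(labels: list[str]) -> dict:
        total = len(labels)
        return {label: count / total for label, count in Counter(labels).items()}

    base, new = freq(baseline), freq(incoming)
    return {
        label: (base.get(label, 0.0), new.get(label, 0.0))
        for label in set(base) | set(new)
        if abs(base.get(label, 0.0) - new.get(label, 0.0)) > tol
    }

# Example: a refresh in which "positive" labels jumped suspiciously.
flagged = label_shift(
    baseline=["positive"] * 50 + ["negative"] * 50,
    incoming=["positive"] * 70 + ["negative"] * 30,
)
print(flagged)  # both labels flagged: {'positive': (0.5, 0.7), 'negative': (0.5, 0.3)}
```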

The findings indicate that transparency mechanisms such as model cards, SBOMs, and vendor attestations, while useful, are insufficient when treated as static documentation. The research argues that visibility must be operationalised through continuous validation, clearer accountability boundaries, and enforceable remediation pathways. By connecting visibility to insight and action, organisations are better positioned to detect emerging risks, verify supply chain claims, and respond decisively when dependencies fail or become compromised. This approach supports the development of safer, more resilient, and more trustworthy AI systems, particularly in public sector and critical infrastructure contexts.
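
One way to read "connecting visibility to insight and action" in engineering terms is a release gate that runs every supply chain check and falls back to a verified state on failure. The sketch below is an assumption about how such a gate might look; the placeholder checks stand in for real integrations such as attestation verification and canary evaluation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Check:
    name: str
    run: Callable[[], bool]  # returns True when the dependency is healthy

def gate_release(checks: list[Check]) -> bool:
    """Run every supply chain check and fail closed on the first failure.

    The checks themselves are placeholders; the point is that visibility
    (the check results) is wired directly to an enforceable decision.
    """
    for check in checks:
        if not check.run():
            print(f"remediation triggered: {check.name} failed")
            return False
    return True

# Hypothetical checks standing in for real integrations.
release_ok = gate_release([
    Check("weights match manifest", lambda: True),
    Check("vendor attestation is current", lambda: True),
    Check("canary evaluation within tolerance", lambda: False),
])
print("deploy" if release_ok else "roll back to last verified version")
```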
