Addressing real‑world AI security needs
As AI became increasingly embedded in critical and high‑stakes systems, LASR Validate focused on the need for technologies that are secure, resilient and trustworthy. The programme supported SMEs working at the intersection of AI and security to better understand priority problem areas, refine their technical approaches and align their solutions with real‑world requirements.
LASR Validate provided a structured environment for SMEs to explore AI security challenges, including issues related to robustness, assurance and deployment risk, while benefiting from insight across government, academia and industry.

Connecting SMEs with the AI security ecosystem
A core feature of LASR Validate was its emphasis on collaboration. Through the programme, SMEs were connected with:
- Researchers and technical experts working across AI and security
- Public‑sector stakeholders shaping AI security priorities
- Industry partners with experience deploying AI in high‑stakes environments
This collaborative model enabled participating companies to test assumptions, strengthen their propositions and build relationships to support future development and adoption.
Supporting innovation through collaboration
Delivered as part of LASR’s wider mission, Validate demonstrated LASR’s role as a platform for collaboration between government, academia and industry. By supporting SMEs at an early stage, the programme helped surface promising AI security capabilities and contributed to a stronger, more connected AI security ecosystem in the UK.
Insights generated through LASR Validate informed subsequent LASR activity, helping to shape future programme design, research priorities and ecosystem engagement.
Laying the foundations for future LASR programmes
LASR Validate marked the starting point for LASR’s programme activity, laying the foundations for future initiatives focused on AI security research, innovation and commercialisation. As LASR’s first programme, Validate demonstrated the value of targeted, challenge‑driven support for SMEs working on some of the most pressing AI security issues.
Meet the cohort
Aeris-UK
Aeris-UK bridges AI, advanced modelling and operational needs, emphasising modularity, cost-effectiveness and reliability for deployment across diverse contexts, from battlefield operations to critical infrastructure protection. The team has developed an innovative capability called SATORI – a simulation tool designed to analyse vulnerabilities, quantify risks and enhance system resilience.
eCora
eCora has created a secure-by-default platform that wraps new or existing applications in a security container, allowing them to be deployed into untrusted or hostile environments. It builds on underlying hardware innovations in trusted computing and confidential computing, enabling workloads to be deployed as a black box that can be used as intended but gives users and attackers no ability to see inside.
Fendr
Fendr was founded after its team witnessed sensitive code leak into a large language model, recognising that while generative AI tools are accelerating the way users understand, write and debug code, they are also creating data vulnerabilities and scope for cyber attacks. The company builds tools that enable secure AI usage by intelligently monitoring and protecting data.
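Fendr's actual tooling is not described here; purely as a loose illustration of the monitoring-and-protection idea, a client-side redaction step that screens text for likely secrets before it is sent to an LLM might look like the sketch below. The pattern set and the `redact_for_llm` function are hypothetical, not Fendr's implementation:

```python
import re

# Illustrative patterns only; a real tool would use far richer detection.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"]?[A-Za-z0-9]{20,}"),
}

def redact_for_llm(text):
    """Replace likely secrets with placeholders before text leaves the machine."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        if pattern.search(text):
            findings.append(name)
            text = pattern.sub(f"[REDACTED:{name}]", text)
    return text, findings
```

Running such a filter locally, before any network call, is what keeps the sensitive material from ever reaching the model provider.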
Fuzzy Labs
Fuzzy Labs is focused on advancing and innovating open-source machine learning operations (MLOps) solutions to streamline AI model deployment and make a positive impact. The aim is to empower data scientists to productionise AI models and to make it easy for them to collaborate, working together more efficiently and with fewer errors.
Pytilia
Pytilia recognises that as AI models are deployed into business-critical functions, their outputs and any anomalies they detect still require human analyst review; high volumes of false-positive alerts make this processing inefficient and analyst workloads excessive. Pytilia is working to solve this with a feedback-loop engine that learns the characteristics of human analyst feedback to filter and prioritise alerts, reducing the false positives created by AI models.
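Pytilia's engine itself is not public, but the feedback-loop idea can be sketched loosely. The toy filter below (every name and design choice is an assumption, not Pytilia's implementation) keeps per-feature counts of analyst verdicts and uses them to score new alerts, deprioritising those that resemble past false positives:

```python
from collections import defaultdict

class FeedbackLoopFilter:
    """Toy alert filter that learns from analyst verdicts (illustrative only).

    An alert is a set of feature strings; analysts mark each reviewed alert
    as a true positive (True) or false positive (False). New alerts are
    scored by how often their features appeared in past true positives.
    """

    def __init__(self, threshold=0.3):
        self.threshold = threshold     # below this score, alerts are filtered out
        self.tp = defaultdict(int)     # feature -> true-positive count
        self.total = defaultdict(int)  # feature -> total reviewed count

    def record_feedback(self, alert, is_true_positive):
        for feature in alert:
            self.total[feature] += 1
            if is_true_positive:
                self.tp[feature] += 1

    def score(self, alert):
        seen = [f for f in alert if self.total[f] > 0]
        if not seen:
            return 1.0                 # no history yet: keep the alert for review
        return sum(self.tp[f] / self.total[f] for f in seen) / len(seen)

    def prioritise(self, alerts):
        """Return alerts sorted by score, dropping likely false positives."""
        kept = [a for a in alerts if self.score(a) >= self.threshold]
        return sorted(kept, key=self.score, reverse=True)
```

In this sketch, alerts whose features historically drew mostly false-positive verdicts fall below the threshold and never reach the analyst, while unfamiliar alerts default to review.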
Syncrosis
Syncrosis has developed the Helios Matrix, its flagship technology platform designed to unify fragmented intelligence systems, deliver real-time situational awareness and empower decision-makers with actionable insights. It detects and neutralises malicious inputs such as poisoned queries or prompt injections and secures AI systems across their entire lifecycle, integrating continuous data validation and tamper-proof pipelines to eliminate vulnerabilities.


