
As the AI security conversation continues to evolve, it’s crucial to keep the dialogue going. Our recent LASR Lates session in Cheltenham brought together a diverse panel of experts to explore the challenges and opportunities in securing AI systems. The evening featured insights from key figures in the field, including:
- Holly Smith, Innovation Lead at Plexal (Moderator)
- Louise Cushnahan, Head of Innovation at LASR partner CSIT’s CyberAI Hub, part of Queen’s University Belfast
- Dave Palmer, an investor in cybersecurity and AI startups at Ten Eleven Ventures, drawing on years of experience at Darktrace
- Darren Borland, Senior Software Engineer at Pytilia and a member of the LASR Validate cohort, our first programme designed to support innovators developing AI security products
Key Takeaways: Data Provenance, Explainability, and AI Security Lifecycles
A major theme of the discussion was data provenance—understanding how AI models are built, what data they rely on, and how to secure those foundations. As Darren highlighted, vulnerabilities often emerge from the training phase itself, making it critical to embed security from the outset.
Explainability—AI’s ability to justify its decision-making—was another key focus. Industries such as financial services, cybersecurity, and healthcare depend on trust and transparency, making explainability a non-negotiable aspect of AI security.
AI Security: A Lifecycle Challenge, Not Just a Software Problem
From an investment perspective, Dave Palmer underscored that AI security is not just about pre-deployment safeguards—it’s a continuous process. With major startups like HiddenLayer and Protect AI tackling different phases of AI security, the challenge is to build AI systems that can defend against adversarial threats throughout their lifecycle.
Pytilia, part of our LASR Validate programme, is tackling a critical but often-overlooked issue: securing feedback loops in AI systems. AI security isn’t just about preventing attacks; it’s also about ensuring that human input into AI remains reliable and resilient against manipulation.
Collaboration is Key
Reflecting on the importance of collaboration, Louise Cushnahan emphasized that academic research often identifies AI security challenges well before they gain industry-wide attention. The CyberAI Hub has been working on AI security since 2023, engaging in projects with major players like Thales and NVIDIA. Expanding this research-industry model across the UK is vital to ensuring that cutting-edge security solutions reach the wider economy.
AI Security: An Urgent Challenge and an Exciting Opportunity
As AI capabilities advance, so do the threats. A pressing question from the audience was whether AI is introducing entirely new types of security risks. The panel’s consensus? Absolutely. AI-powered attacks are becoming faster, more sophisticated, and harder to detect, raising the stakes for cybersecurity professionals.
Looking ahead, Dave and Darren explored the implications of artificial general intelligence (AGI) and AI agent networks. As AI systems become more autonomous and their interactions harder to anticipate, securing them will only become more complex. The key challenge is building robust security measures that can adapt to evolving threats and unpredictable AI behaviours.
At LASR Lates Cheltenham, the message was clear: AI security is both an urgent challenge and a huge opportunity. If we get it right, we can develop AI systems that are not just powerful, but also secure, transparent, and resilient.
The conversation doesn’t stop here—it’s just getting started. Want to see a LASR Lates event near you? Let us know.

