LASR Opportunity Call Agentic AI Security Programme

The LASR Agentic AI Security Programme was delivered in 2025 as part of LASR’s early programme activity, focusing on the emerging security challenges associated with agentic AI systems, supported by Cisco and HM Government.

Deadline:
September 15, 2025
Plexal Stratford, Here East, London
Applications now closed
Supported by:
Cisco & HM Government

Agentic AI systems are characterised by increasing autonomy, the ability to interact with other agents, tools or environments, and the capacity to make decisions with limited human oversight. As these systems began to move from research settings into real-world deployment, the programme addressed the need for security to be embedded by design.

Date posted
July 1, 2025

Key Dates

Applications now closed

The application window for this programme has now closed. Please subscribe to our mailing list to stay informed about future programmes.

Opportunity areas in agentic AI security

The programme was structured around a set of opportunity areas where new approaches to AI security were required to support the safe deployment of agentic systems. These opportunity areas included:

  1. Protecting the agent ecosystem and discovery infrastructure

This opportunity area centres on securing the infrastructure that allows AI agents to register, discover, and communicate with each other across networks or organisational boundaries. Put another way, we are looking for the "DNS and HTTPS of agents" - foundational services that connect diverse agents - made resilient against spoofing, tampering, and abuse.

The goal is to ensure that agents can find each other and interact in a trusted manner, preventing unauthorised agents from infiltrating the ecosystem. Drawing from Cisco's Internet of Agents initiative, we are looking for solutions which ensure that only legitimate, authenticated agents participate and that all interactions are cryptographically protected. 

What are the human-driven attack vectors?

In addition to structural challenges, adversarial threats, such as memory poisoning and agent impersonation, demand explicit countermeasures. Protecting this "DNS and HTTPS of agents" requires both cryptographic trust and robust abuse resistance.

  2. Securing confidential compute and RAG architectures for Agentic AI

This opportunity area addresses the secure integration of enterprise data into autonomous agent reasoning, focusing on two key enablers: Retrieval-Augmented Generation (RAG) architectures and confidential computing environments.

The challenge is to ensure that agents only retrieve what they are allowed to, that the retrieved knowledge is accurate and not compromised, and that all processing of sensitive data occurs in a secure and isolated manner.
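The first of those three requirements, that agents only retrieve what they are allowed to, can be illustrated with a minimal sketch of a permission-aware retriever. All names here are hypothetical, and a keyword match stands in for real vector similarity; the point is that the authorisation check happens before any document can reach the agent's context.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_roles: frozenset[str]

@dataclass
class SecureRetriever:
    """Toy RAG retriever that enforces per-role access before any
    document reaches the agent's context window."""
    corpus: list[Document] = field(default_factory=list)

    def retrieve(self, query: str, agent_roles: set[str]) -> list[str]:
        hits = []
        for doc in self.corpus:
            # Authorisation first: the agent never sees documents
            # outside its clearance, regardless of relevance.
            if not (doc.allowed_roles & agent_roles):
                continue
            # Naive keyword match stands in for vector similarity.
            if query.lower() in doc.text.lower():
                hits.append(doc.text)
        return hits

retriever = SecureRetriever([
    Document("d1", "Quarterly revenue figures", frozenset({"finance"})),
    Document("d2", "Public product roadmap", frozenset({"finance", "support"})),
])
# A support agent querying "product" only sees the public document.
print(retriever.retrieve("product", {"support"}))  # ['Public product roadmap']
```

Filtering by permission before relevance ranking is a deliberate ordering: it means a prompt-injected or compromised query can never widen the set of documents the agent is entitled to see.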

What are the human-driven attack vectors?

Memory poisoning, intent-breaking, and context manipulation pose growing risks, especially when agent reasoning depends on retrieved or embedded information.

3. Security tooling for agent infrastructure

This opportunity area focuses on how we can secure the platforms and pipelines where agents are delivered, deployed, and orchestrated. This is not about the agent's behaviour itself; it is about applying and extending DevSecOps and cloud security principles to the agent-specific infrastructure that traditional security tools do not yet fully cover. This includes securing areas such as: shared agent services (e.g. message brokers, memory stores), deployment pipelines and CI/CD for agent code or prompts, authentication and access layers for agents, and agent orchestration systems (such as workflow managers or agent controllers).

The challenge is to build tooling that protects the agent platform itself from compromise, ensuring that only secure, verified agent components run, and that agents operate with least privilege within hardened environments.
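One building block for "only secure, verified agent components run" is an integrity gate in the deployment pipeline. The sketch below is a hypothetical illustration, not a product design: the component names and digests are invented, and in practice the allowlist would itself be signed and produced by a CI/CD signing step.

```python
import hashlib

# Hypothetical allowlist of approved component digests, e.g. produced
# by a signing step in the agent platform's CI/CD pipeline.
APPROVED_DIGESTS = {
    "summariser-tool": hashlib.sha256(b"approved tool code v1").hexdigest(),
}

def verify_component(name: str, code: bytes) -> bool:
    """Admit a component to the agent platform only if its hash
    matches the approved digest (an integrity gate, not a sandbox)."""
    expected = APPROVED_DIGESTS.get(name)
    if expected is None:
        return False  # unknown components are denied by default
    return hashlib.sha256(code).hexdigest() == expected

assert verify_component("summariser-tool", b"approved tool code v1")
# Tampered or unapproved code is refused before deployment.
assert not verify_component("summariser-tool", b"approved tool code v1 + backdoor")
assert not verify_component("rogue-tool", b"anything")
```

A gate like this addresses supply-chain tampering; the least-privilege requirement is complementary and would be enforced at runtime by the orchestration layer, not by this check.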

What are the human-driven attack vectors?

This challenge targets the underlying agent platforms and pipelines. In addition to infrastructure risks, we must account for tool misuse, unexpected code execution, and other vectors from threat models that exploit agent privilege and connectivity. This is critical to ensuring that agents act securely in complex environments.

By focusing on these areas, the programme supported exploration of practical approaches to reducing risk while enabling innovation in agent-based AI.

Supporting SMEs through challenge-driven collaboration

Delivered through a structured, challenge-driven format, the programme brought together UK SMEs, researchers and public sector stakeholders working at the forefront of AI security.

Participating companies were supported to develop and refine technical approaches aligned to the programme’s opportunity areas, while engaging with expertise from across government, academia and industry. This collaborative environment enabled SMEs to test assumptions, strengthen their propositions and align their solutions with emerging AI security needs.

Stay connected with the latest LASR opportunities.