PhD Candidate, Interactive Artificial Intelligence @ University of Bristol
I am a PhD student at the UKRI-funded Interactive Artificial Intelligence CDT at the University of Bristol, supervised by Prof. Peter Flach, Dr. Telmo de Menezes e Silva Filho and Dr. Oliver Ray. My work focuses on developing techniques for building robust and trustworthy autonomous agents. My primary research interest is using Symbolic AI tools to build agents capable of synthesising causal world models in the language of First-Order Logic. I am also investigating in-context weight-space learning for physics-informed models.
I completed my undergraduate degree in Maths & Physics at the University of Bath in 2023. During my studies, I worked as a Data Scientist at Lukkap, where I built data applications that automated analysis and extracted insights, substantially streamlining existing workflows. For my final-year dissertation, I developed a CNN-based system to detect dangerous sea-wave events on the Cornish coast from radar data, in collaboration with the Met Office. I joined the IAI CDT after graduation to pursue a career in AI research.
I will organise the 2026 IAI-ProAI Spring Research Conference.
JAN 2026: First paper accepted at ICLR 2026.
publications
Weight-Space Linear Recurrent Neural Networks [ICLR 2026] [PDF] [CODE]
Roussel Desmond Nzoyem, Nawid Keshtmand, Enrique Crespo Fernandez, Idriss Tsayem, Raul Santos-Rodriguez, David A.W. Barton, Tom Deakin
Abstract:
We introduce WARP (Weight-space Adaptive Recurrent Prediction), a simple yet powerful model that unifies weight-space learning with linear recurrence to redefine sequence modelling. Unlike conventional recurrent neural networks (RNNs), which collapse temporal dynamics into fixed-dimensional hidden states, WARP explicitly parametrises its hidden state as the weights and biases of a distinct auxiliary neural network, and uses input differences to drive its recurrence. This brain-inspired formulation enables efficient gradient-free adaptation of the auxiliary network at test time, in-context learning abilities, and seamless integration of domain-specific physical priors. Empirical validation shows that WARP matches or surpasses state-of-the-art baselines on diverse classification tasks, ranking in the top three on 5 of 6 challenging real-world datasets. Furthermore, extensive experiments across sequential image completion, multivariate time-series forecasting, and dynamical-system reconstruction demonstrate its expressiveness and generalisation capabilities. Remarkably, a physics-informed variant of our model outperforms the next best model by more than 10x. Ablation studies confirm the architectural necessity of key components, solidifying weight-space linear RNNs as a transformative paradigm for adaptive machine intelligence.
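The recurrence described in the abstract can be illustrated with a toy sketch: the hidden state is the flattened weight vector of a small auxiliary MLP, and a linear recurrence over input differences updates those weights without any gradients. All names, shapes, and the diagonal transition below are illustrative assumptions, not the paper's actual parametrisation.

```python
import numpy as np

rng = np.random.default_rng(0)

D_IN, D_HID = 2, 8                        # auxiliary MLP: D_IN -> D_HID -> 1
N_W = D_IN * D_HID + D_HID + D_HID + 1    # total auxiliary parameters

A = 0.9 * np.ones(N_W)                        # diagonal linear transition (assumed)
B = 0.1 * rng.standard_normal((N_W, D_IN))    # input projection (assumed)

def aux_forward(theta, x):
    """Run the auxiliary MLP whose weights are the hidden state theta."""
    i = 0
    W1 = theta[i:i + D_IN * D_HID].reshape(D_HID, D_IN); i += D_IN * D_HID
    b1 = theta[i:i + D_HID]; i += D_HID
    W2 = theta[i:i + D_HID]; i += D_HID
    b2 = theta[i]
    return W2 @ np.tanh(W1 @ x + b1) + b2

def warp_scan(xs, theta0):
    """Linear recurrence driven by input differences; decode via the aux net."""
    theta, x_prev = theta0, xs[0]
    ys = []
    for x in xs:
        theta = A * theta + B @ (x - x_prev)   # gradient-free weight update
        x_prev = x
        ys.append(aux_forward(theta, x))       # prediction from current weights
    return np.array(ys)

xs = rng.standard_normal((16, D_IN))
ys = warp_scan(xs, 0.1 * rng.standard_normal(N_W))
print(ys.shape)   # one scalar prediction per time step
```

Because the transition here is elementwise (diagonal), the scan could also be parallelised, which is one motivation for linear recurrences in sequence models generally.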
Continual learning and refinement of causal models through dynamic predicate invention [arXiv] [PDF] [Submitted to UCLR Workshop @ ICLR 2026]
Enrique Crespo-Fernandez, Oliver Ray, Telmo de Menezes e Silva Filho, Peter Flach
Abstract:
Efficiently navigating complex environments requires agents to internalize the underlying logic of their world, yet standard world modelling methods often struggle with sample inefficiency, lack of transparency, and poor scalability.
We propose a framework for constructing symbolic causal world models entirely online by integrating continual model learning and repair into the agent's decision loop. Leveraging Meta-Interpretive Learning and predicate invention to find semantically meaningful, reusable abstractions, the agent constructs a hierarchy of disentangled, high-quality concepts from its observations. We demonstrate that our lifted inference approach scales to domains with complex relational dynamics, where propositional methods suffer from combinatorial explosion, while achieving sample efficiency orders of magnitude higher than an established PPO neural-network baseline.
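Predicate invention, as used in the abstract, can be illustrated with a toy example: when learned rules share a sub-body, factor it out into a new named predicate that later rules can reuse. This is a conceptual sketch only; Meta-Interpretive Learning systems invent predicates during proof search, not by post-hoc syntactic factoring as here, and all predicate names below are made up for illustration.

```python
from itertools import count

fresh = (f"inv_{i}" for i in count(1))  # generator of fresh predicate names

def invent_shared_predicate(rules):
    """Factor the longest body prefix shared by ALL rules into a new predicate.

    Rules are (head, body) pairs, with body a tuple of literal strings.
    Returns the rewritten rules plus the invented definition, and its name.
    """
    bodies = [body for _, body in rules]
    shared = []
    for lits in zip(*bodies):
        if len(set(lits)) == 1:     # same literal in every rule at this position
            shared.append(lits[0])
        else:
            break
    if not shared:
        return rules, None
    name = next(fresh)
    new_def = (name, tuple(shared))  # definition of the invented predicate
    rewritten = [(head, (name,) + body[len(shared):]) for head, body in rules]
    return rewritten + [new_def], name

# Two hypothetical learned rules sharing the prefix at(A,P), has(A,key):
rules = [
    ("unlock(A,B)", ("at(A,P)", "has(A,key)", "adjacent(P,B)")),
    ("open(A,B)",   ("at(A,P)", "has(A,key)", "door(B)")),
]
new_rules, invented = invent_shared_predicate(rules)
# The shared prefix becomes a reusable abstraction named by `invented`.
```

The payoff in the paper's setting is that such invented predicates are lifted (they carry variables, not constants), so one abstraction covers many ground situations without re-learning.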
resources
all software releases of the above projects can also be found here!