a hippocampal · memristive · neuromorphic architecture
· study of a synaptic substrate ·
co-localised memory and compute
A self-organising memristor array where each intersection stores a synaptic weight and performs analogue multiply-accumulate in place. Energy is spent only when events flow — not on shuffling activations.
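A minimal sketch of that in-place multiply-accumulate, with NumPy standing in for the analogue physics and all numeric ranges illustrative rather than MEMPHIS device values: a conductance G[i, j] at each intersection and read voltages on the columns give the row currents i = G·v in a single analogue step, by Ohm's law and Kirchhoff's current law.

```python
import numpy as np

# Illustrative crossbar: each intersection (i, j) holds a conductance G[i, j]
# encoding one synaptic weight. Driving the columns with voltages makes every
# row sum its currents in parallel: i = G @ v (Ohm + Kirchhoff). The multiply-
# accumulate happens where the weights live; no activations are moved.
rng = np.random.default_rng(0)

G = rng.uniform(1e-6, 1e-4, size=(4, 8))  # conductances in siemens (assumed range)
v = rng.uniform(0.0, 0.2, size=8)         # read voltages in volts (assumed range)

row_currents = G @ v                      # the in-place analogue MAC
print(row_currents)                       # row currents in amperes
```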
The core breakthrough of MEMPHIS lies in the integration of hippocampal-inspired computational principles with a self-organising memristive hardware substrate, enabling a new class of ultra-low-power, adaptive computing systems in which memory and computation are co-localised.
This is a fundamental departure from conventional architectures, in which processing and memory sit on opposite sides of a bus. MEMPHIS implements a two-phase computational paradigm — online event-driven processing for real-time interaction; offline replay-driven consolidation for memory optimisation — inside the same physical system.
The decisive validation: a small-scale memristive spiking network (CA3-CA1) performs associative recall and replay-driven consolidation, improving task performance after offline processing without further external input.
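A toy software analogue of that validation loop, in which every detail (pattern sizes, noise levels, the Hebbian rule, the replay seed) is an assumption chosen for illustration rather than the MEMPHIS experiment: store CA3-CA1 pattern pairs with a weak, noisy online pass, then let offline replay settle the recurrent attractors and consolidate the completed episodes into the CA3-to-CA1 pathway, and check that recall improves with no further external input.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ca3, n_ca1, n_pairs = 64, 32, 5
n = n_ca3 + n_ca1

X = rng.choice([-1, 1], size=(n_pairs, n_ca3))  # CA3 codes (episodes)
Y = rng.choice([-1, 1], size=(n_pairs, n_ca1))  # CA1 codes they should evoke

def flip(p, k):
    """Corrupt a +/-1 pattern by flipping k random entries."""
    q = p.copy()
    q[rng.choice(p.size, size=k, replace=False)] *= -1
    return q

# Online phase: one fast, noisy Hebbian pass. The recurrent weights store each
# episode (CA3 and CA1 activity together) auto-associatively; the CA3->CA1
# pathway receives only a weak first imprint from a degraded presentation.
W_rec = np.zeros((n, n))
W_out = np.zeros((n_ca1, n_ca3))
for x, y in zip(X, Y):
    z = np.concatenate([x, y])
    W_rec += np.outer(z, z) / n
    W_out += 0.1 * np.outer(y, flip(x, 20)) / n_ca3
np.fill_diagonal(W_rec, 0.0)

def recall_accuracy():
    per_bit = [np.mean(np.sign(W_out @ flip(x, 10)) == y) for x, y in zip(X, Y)]
    return float(np.mean(per_bit))

print("recall before replay:", recall_accuracy())

# Offline phase: no external input reaches the network. Replay seeds the
# recurrent net with a degraded trace (idealised here; in the real system the
# seed comes from intrinsic dynamics), lets it settle toward the stored
# attractor, then consolidates the completed episode into the CA3->CA1 pathway.
for _ in range(3):                               # a few replay sweeps
    for x, y in zip(X, Y):
        trace = flip(np.concatenate([x, y]), 24)
        for _ in range(5):                       # attractor settling
            trace = np.sign(W_rec @ trace)
        x_r, y_r = trace[:n_ca3], trace[n_ca3:]
        W_out += np.outer(y_r, x_r) / n_ca3      # consolidate completed episode

print("recall after replay:", recall_accuracy())
```

The design choice mirrors the caption: the consolidation targets are generated by the network's own attractor dynamics, so the improvement after the offline phase is earned without a teacher or new stimuli.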
Six labelled circuit motifs anchor the device co-design problem. Each node carries a role annotation, and its incident edges show its couplings to the rest of the substrate.
The STDP window: a pre-synaptic spike that precedes a post-synaptic spike strengthens the connection; reverse the order and the connection weakens. MEMPHIS demands that the memristive substrate produce this window intrinsically from device physics, not from a software training rule applied over the top.
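The window is commonly written as a pair of exponentials; the sketch below uses that textbook form with illustrative amplitudes and time constants, the form MEMPHIS expects to fall out of device physics rather than be computed in software.

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms).

    Pre before post (dt > 0): potentiation. Post before pre (dt < 0):
    depression. Amplitudes and time constants are illustrative placeholders,
    not MEMPHIS device parameters.
    """
    dt = np.asarray(dt_ms, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

print(stdp_dw([-40, -10, 10, 40]))  # depression for dt < 0, potentiation for dt > 0
```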
Online events drive sparse, salient computation. Offline intrinsic dynamics replay and consolidate — without external input. One physical substrate, two regimes.
Per-synaptic-event energy on a log axis. GPU AI sits six orders of magnitude above mammalian cortex. MEMPHIS targets the biological benchmark — three orders below today’s best neuromorphic silicon.
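Pinning illustrative absolute numbers to those ratios makes the log axis concrete. Only the ratios are asserted by the text; the anchor value below assumes a frequently cited order-of-magnitude estimate of roughly 10 fJ per cortical synaptic event.

```python
# Illustrative per-synaptic-event energies; only the order-of-magnitude
# ratios (6 decades GPU vs cortex, 3 decades neuromorphic vs cortex) are
# asserted by the text. The 10 fJ anchor for biology is an assumption.
cortex = 10e-15               # ~10 fJ per synaptic event (assumed anchor)
neuromorphic = cortex * 1e3   # three orders above biology -> ~10 pJ
gpu = cortex * 1e6            # six orders above biology   -> ~10 nJ

for name, e in [("cortex", cortex),
                ("neuromorphic silicon", neuromorphic),
                ("GPU AI", gpu)]:
    print(f"{name:>20}: {e:.0e} J/event")
```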
Distributed, event-driven computation inspired by biological circuits — not sequential and energy-intensive.
Continuous, replay-consolidated adaptation that addresses catastrophic forgetting without separating training from deployment.
Hardware-embedded sleep-like processes — replay and synaptic scaling — for long-term memory formation and restructuring.
Self-organising memristive systems as a physically grounded implementation of synaptic plasticity, targeting competitive energy efficiency and high integration density.
Biologically inspired modulatory pathways for prioritisation and adaptive memory processing, beyond current neuromorphic implementations.
A hippocampal-inspired neuromorphic system with integrated learning and memory consolidation is a concrete step toward post–von-Neumann computing — paradigms in which intelligence emerges from the interaction between computation, memory and physical substrate, not from their separation.
The proposed system establishes three things at once: the scientific basis for replay-driven learning in artificial systems, the technological feasibility of ultra-low-power adaptive hardware, and a scalable framework for future brain-inspired architectures in AI and robotics.
The project focuses on a constrained hippocampal module — but the principles developed here are directly extendable to more complex cognitive systems, embedded inside the dynamics of the physical hardware itself.
The open risk: whether memristive devices can be matched and stabilised at the precision the replay-driven dynamics require. The biology demands a tighter device-to-device tolerance than today's memristive arrays routinely deliver; the gap must close, and the engineering question is whether self-organisation can close it inside the operating regime.
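One hedged way to frame the tolerance question numerically is a toy Monte Carlo, with the lognormal mismatch model, the array sizes and the spread values all assumptions: scatter each stored conductance independently and watch how associative recall degrades as device-to-device spread grows.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy mismatch study: how much device-to-device spread can an associative
# readout tolerate? Lognormal magnitude scatter per device is an assumed
# model, not a characterised MEMPHIS device distribution.
n_in, n_out, n_patterns = 64, 32, 5
X = rng.choice([-1, 1], size=(n_patterns, n_in))
Y = rng.choice([-1, 1], size=(n_patterns, n_out))
W = sum(np.outer(y, x) for x, y in zip(X, Y)) / n_in  # ideal stored weights

for sigma in [0.05, 0.2, 0.5, 1.0]:                   # relative spread per device
    mismatch = rng.lognormal(mean=0.0, sigma=sigma, size=W.shape)
    W_dev = W * mismatch                              # each synapse scaled independently
    acc = np.mean([np.sign(W_dev @ x) == y for x, y in zip(X, Y)])
    print(f"sigma={sigma:.2f}: recall bit-accuracy {acc:.3f}")
```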