Research

I build learning systems shaped by biological constraints—developmental curricula, active sensing, recurrent computation, and continual adaptation—and validate them against behavioral and neural measurements.


Developmental constraints on learning: learning history as an engineering prior

Infant visual diet

Developmentally-inspired shape bias in artificial neural networks
DVD: developmental shape bias

With: Zejin Lu, Radoslaw Cichy, Tim Kietzmann

Inspired by the Adaptive Initial Degradation hypothesis, we trained ANNs on a graded coarse-to-fine image diet and found strongly shape-biased classification behavior, along with robustness to distortions and adversarial attacks.
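As an illustration, one way to implement such a graded diet is a blur schedule that starts coarse and sharpens over training. This is a minimal sketch with a hypothetical linear schedule, not the actual training recipe:

```python
def blur_sigma(epoch: int, total_epochs: int, sigma_max: float = 4.0) -> float:
    """Gaussian-blur standard deviation to apply at a given epoch.

    Starts at sigma_max (coarse, low-acuity input) and decays linearly
    to 0 (fine, full-acuity input), loosely mimicking the gradual
    sharpening of infant visual acuity over development.
    """
    progress = min(epoch / max(total_epochs - 1, 1), 1.0)
    return sigma_max * (1.0 - progress)


# Example schedule over a 5-epoch run: 4.0, 3.0, 2.0, 1.0, 0.0
schedule = [blur_sigma(e, 5) for e in range(5)]
```

Each epoch's images would then be blurred with the returned sigma before being fed to the network.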

Active perception and world modeling: sampling + prediction + control

Sequence model interpretability

Emergent path integration and binding through sequential prediction
World-Model

With: Linda Ventura, Victoria Bosch, Tim Kietzmann

Predicting the next input yields representations that encode relational structure among the inputs: an RNN path-integrates and binds input tokens to their absolute locations in 2D scenes, in context.
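The task structure can be illustrated with a toy sequence generator (a hypothetical setup for illustration, not the actual experiment): a random walk over a 2D grid of tokens emits (action, observation) pairs, and predicting the next observation from this stream requires integrating the actions into a position estimate and binding tokens to locations.

```python
import random


def make_sequence(scene, width, height, steps, rng=random):
    """Random walk on a toroidal 2D grid of tokens.

    `scene` maps (x, y) -> token. Each step emits the action taken
    and the token observed afterwards; a model predicting the next
    token must path-integrate the actions and bind each token to
    its absolute location.
    """
    x, y = rng.randrange(width), rng.randrange(height)
    seq = [((0, 0), scene[(x, y)])]  # initial observation, null action
    for _ in range(steps):
        dx, dy = rng.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        x, y = (x + dx) % width, (y + dy) % height
        seq.append(((dx, dy), scene[(x, y)]))
    return seq
```

A network trained with next-token prediction on such sequences never sees coordinates explicitly; any position code it forms is emergent.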

Scene representation

Glimpse prediction for human-like scene representation
GPN

With: Adrien Doerig, Alexander Kroner, Carmen Amme, Tim Kietzmann

Predicting the next glimpse features (given the saccade) coaxes a network to encode co-occurrence and spatial arrangement in a visual-cortex-aligned scene representation.

Controlling attention

Assessing the emergence of an attention schema in object tracking
Attention schema

With: Lotta Piefke, Adrien Doerig, Tim Kietzmann

In cluttered object tracking with RL, an agent learns an explicit encoding of attentional state (an “attention schema”), most useful when attention cannot be inferred from the stimulus.

Adaptation and cognitive flexibility: stability, plasticity, and rule switching

Rule inference in NNs

Flexible rule learning in machines
Flexible rule learning

With: Rowan Sommers, Daniel Anthes, Tim Kietzmann

Inspired by Hummos, we built an image-based Wisconsin Card Sorting Task variant and found behavior suggesting sparks of cognitive flexibility: compositional rule inference in activity space.

Continual learning and drift

Structured representational drift aids continual learning
Representational drift

With: Daniel Anthes, Peter König, Tim Kietzmann

Readout misalignment due to learning-induced drift is a core continual-learning problem. Constraining drift to the readout null-space helps networks stay both stable and plastic.
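The core idea can be sketched in a few lines. This is an illustrative pure-Python toy assuming a linear readout (the paper's actual method constrains drift during network training, not single vectors): drift is projected onto the null space of the readout, so the readout's output is unaffected.

```python
def orthonormalize(rows, eps=1e-10):
    """Gram-Schmidt: orthonormal basis for the span of `rows`."""
    basis = []
    for v in rows:
        w = list(v)
        for b in basis:
            proj = sum(wi * bi for wi, bi in zip(w, b))
            w = [wi - proj * bi for wi, bi in zip(w, b)]
        norm = sum(wi * wi for wi in w) ** 0.5
        if norm > eps:
            basis.append([wi / norm for wi in w])
    return basis


def project_to_readout_nullspace(delta, readout_rows):
    """Remove from the drift vector `delta` every component the linear
    readout can 'see', so drifting along the result leaves the readout
    output unchanged (stability) while change remains possible in the
    remaining directions (plasticity)."""
    for b in orthonormalize(readout_rows):
        proj = sum(di * bi for di, bi in zip(delta, b))
        delta = [di - proj * bi for di, bi in zip(delta, b)]
    return delta
```

After projection, the inner product between any readout row and the drift vector is zero up to numerical precision.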

Feedback and recurrent computation: time, inference, correction

Representational dynamics in RCNNs

How does recurrence interact with feedforward processing in RNNs?
BLT arrangement

With: Adrien Doerig, Tim Kietzmann

The feedforward sweep instantiates a representational arrangement that dovetails with a recurrence-induced “equal movement” prior, allowing recurrence to correct classifications over time.

Decluttering due to recurrence

Recurrent operations in neural networks trained to recognise objects
Recurrent flow

With: Giacomo Aldegheri, Tim Kietzmann

Recurrent flow carries category-orthogonal feature information (e.g., location) used iteratively to constrain subsequent category inferences.

Attention, search, and selection in scenes: what gets amplified, where, and why

Modulation as routing

Attentional Routing is as effective as Direct Access
Attentional routing vs direct access

With: Johannes Singer, Radoslaw Cichy, Tim Kietzmann

Attentional routing can push task-relevant information through the network as effectively as “direct access,” challenging claims about the necessity of direct access for explaining behavior.

Task-dependence of visual representations

Task-dependent characteristics of neural multi-object processing
Task-dependent multi-object processing

With: Lu-Chun Yeh, Marius Peelen

The link between multi-object displays and isolated-object representations is task-dependent: it emerges earlier in time for same/different judgments and later for object search (MEG/fMRI).

Characterising search templates

Size-dependence of object search templates in natural scenes
Size dependence in search

With: Surya Gayet, Elisa Battistoni, Marius Peelen

Search templates encode both identity and size; in scenes, size is inferred from an object’s location and is entangled with identity in the template.

Implicitly learning distractor co-occurrence

Statistical learning of distractor co-occurrences facilitates visual search
Distractor co-occurrences

With: Genevieve Quek, Marius Peelen

Increased search efficiency among co-occurring distractors likely reflects faster/more accurate rejection of a distractor’s partner as a possible target.

High-level feature-based attention

Bodies as features in visual search
Bodies as features

With: Marius Peelen

Feature-based attention modulates fMRI representations of body silhouettes presented in task-irrelevant locations in high-level visual cortex.

Attentional modulation in NNs

The function of early task-based modulations in object detection
Early task modulations

With: Giacomo Aldegheri, Marcel van Gerven, Marius Peelen

Early bias/gain modulation alleviates later capacity limits; optimized modulations look like tapping a superposition of networks rather than classic feature-similarity gain.

Scene context

The influence of scene information on object processing
Scene influences on object processing

With: Ilze Thoonen, Sjoerd Meijer, Marius Peelen

Scene co-variation biases categorization, but across four experiments we found no evidence that task-irrelevant scenes boost sensitivity for detecting co-varying objects.

Characterizing perception: representational structure, priors, and report

Ventral stream organization

The nature of the animacy organization in human ventral temporal cortex
Animacy organization

With: Daria Proklova, Daniel Kaiser, Marius Peelen

Animacy organization is not fully driven by visual-feature differences; it also depends on inferred factors such as agency, quantified behaviorally.

Priors in perceptual report

Perception of rare inverted letters among upright ones
Letter illusion

With: Jochem Koopmans, Genevieve Quek, Marius Peelen

In a Sperling-like task, people reported rare inverted letters as upright to the same extent whether the letters were present or absent, suggesting that expectation-driven illusions may be post-perceptual.

Engineering computational systems: tools, controllers, and ML systems

Brain ↔ language interface

Brain reading with a Transformer
Cortext

With: Victoria Bosch, Daniel Anthes, Adrien Doerig, Peter König, Tim Kietzmann

fMRI responses to natural scenes condition word generation in a Transformer, enabling flexible readout of semantic properties like object class and numerosity.

NLP / graph search

Reverse dictionary using a word-definition based graph search
Reverse dictionary

With: Varad Choudhari

Reverse dictionary via n-hop reverse search on a definition graph. Matches state-of-the-art results on a ~3k-word lexicon, but does not scale well to ~80k words.
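A toy version of the approach (illustrative only; function names and the scoring scheme are ours, not the project's): index which headwords’ definitions use each word, then search backwards from the query up to n hops, scoring nearer hits higher.

```python
from collections import defaultdict


def build_reverse_index(dictionary):
    """Map each definition word to the headwords whose definitions use it."""
    index = defaultdict(set)
    for headword, definition in dictionary.items():
        for word in definition.lower().split():
            index[word].add(headword)
    return index


def reverse_lookup(query_words, dictionary, hops=2):
    """Rank headwords reachable from the query within `hops` reverse
    definition steps; hits found at fewer hops score higher."""
    index = build_reverse_index(dictionary)
    scores = defaultdict(float)
    frontier = {w.lower() for w in query_words}
    seen = set(frontier)
    for depth in range(1, hops + 1):
        next_frontier = set()
        for word in frontier:
            for headword in index.get(word, ()):
                scores[headword] += 1.0 / depth
                if headword not in seen:
                    seen.add(headword)
                    next_frontier.add(headword)
        frontier = next_frontier
    return sorted(scores, key=scores.get, reverse=True)
```

For example, querying “creature” against a tiny dictionary where “animal” is defined as “living creature” surfaces “animal” first, then words defined via “animal” one hop further out.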

SNN control

A Spiking Neural Network as a Quadcopter Flight Controller
Quadcopter SNN

With: Sukanya Patil, Bipin Rajendran

Model-based control for velocity–waypoint navigation, plus modular SNNs that perform real-time arithmetic using plastic synapses.