A self-supervised domain-general learning framework for human ventral stream representation

Talia Konkle & George A. Alvarez

Nat Commun 13, 491 (2022). https://doi.org/10.1038/s41467-022-28091-4

Anterior regions of the ventral visual stream encode substantial information about object categories. Are top-down category-level forces critical for arriving at this representation, or can this representation be formed purely through domain-general learning of natural image structure? Here we present a fully self-supervised model which learns to represent individual images, rather than categories, such that views of the same image are embedded nearby in a low-dimensional feature space, distinctly from other recently encountered views. We find that category information implicitly emerges in the local similarity structure of this feature space. Further, these models learn hierarchical features which capture the structure of brain responses across the human ventral visual stream, on par with category-supervised models. These results provide computational support for a domain-general framework guiding the formation of visual representation, where the proximate goal is not explicitly about category information, but is instead to learn unique, compressed descriptions of the visual world.

It is unknown whether object category representations can be formed purely through domain-general learning of natural image structure. Here the authors show that human visual brain responses to objects are well captured by self-supervised deep neural network models trained without labels, supporting a domain-general account.
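The proximate objective the abstract describes — embedding views of the same image near one another and apart from other recently encountered views — is an instance-level contrastive loss. The PyTorch sketch below is a minimal, illustrative rendering of that idea, not the paper's implementation: the function name, the prototype-averaging step, the temperature value, and the use of a plain tensor as a stand-in for a memory queue of recent items are all assumptions for exposition.

```python
import torch
import torch.nn.functional as F

def instance_contrastive_loss(view_embs, recent_embs, temperature=0.07):
    """Illustrative instance-level contrastive loss (not the paper's code).

    view_embs:   (n_views, d) embeddings of augmented views of ONE image.
    recent_embs: (k, d) embeddings of recently encountered items,
                 serving as negatives (hypothetical queue stand-in).
    """
    view_embs = F.normalize(view_embs, dim=1)
    recent_embs = F.normalize(recent_embs, dim=1)

    # Prototype: the mean of this image's views; pulling each view
    # toward it embeds views of the same image nearby.
    prototype = F.normalize(view_embs.mean(dim=0, keepdim=True), dim=1)

    # Similarity of each view to its prototype (positive) and to
    # recently encountered items (negatives).
    pos = view_embs @ prototype.t() / temperature   # (n_views, 1)
    neg = view_embs @ recent_embs.t() / temperature # (n_views, k)

    logits = torch.cat([pos, neg], dim=1)           # (n_views, 1 + k)
    targets = torch.zeros(len(view_embs), dtype=torch.long)  # index 0 = positive
    return F.cross_entropy(logits, targets)

# Toy usage: 5 views of one image in a 128-d space, 4096 recent items.
loss = instance_contrastive_loss(torch.randn(5, 128), torch.randn(4096, 128))
```

Note that nothing in this objective references category labels; any category structure in the learned space would emerge implicitly, which is the claim the paper tests against brain responses.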