Emergence of a compositional neural code for written words: Recycling of a convolutional neural network for reading

By Richard Lyons
November 8, 2021

Importance

Learning to read results in the formation of a specialized region in the human ventral visual cortex. This region, the Visual Word Form Area (VWFA), responds selectively to written words more than to other visual stimuli. However, how the neural circuits at this site implement invariant written-word recognition remains unknown. Here, we show how an artificial neural network originally designed for object recognition can be recycled to recognize words. Once literate, the network develops a sparse neural representation of words that mimics several known aspects of the cognitive neuroscience of reading and leads to accurate predictions of how a small set of neurons implements the orthographic stage of reading acquisition using a compositional neural code.
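
A minimal sketch of what "recycling" a vision network for reading can look like in practice, assuming a PyTorch/torchvision setup of my own choosing; the backbone, vocabulary size, and training details are illustrative stand-ins, not the authors' implementation:

    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_WORDS = 1000  # hypothetical written-word vocabulary size

    # Start from a network already trained for object recognition ...
    net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # ... and recycle it for reading: replace the object classifier with
    # word-output units, then fine-tune on images of written words rendered
    # in varying case, font, and size.
    net.fc = nn.Linear(net.fc.in_features, NUM_WORDS)

    optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    def training_step(word_images: torch.Tensor, word_ids: torch.Tensor) -> float:
        """One fine-tuning step on a batch of rendered word images."""
        optimizer.zero_grad()
        loss = criterion(net(word_images), word_ids)
        loss.backward()
        optimizer.step()
        return loss.item()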

Abstract

The Visual Word Form Area (VWFA) is a region of the human inferotemporal cortex that emerges at a fixed location in the occipitotemporal cortex during reading acquisition and consistently responds to written words in literate individuals. According to the neuronal recycling hypothesis, this region results from the repurposing, for letter recognition, of a subpart of the ventral visual pathway initially involved in the recognition of faces and objects. Moreover, according to the biased connectivity hypothesis, its reproducible localization is due to pre-existing connections from this subregion to areas involved in spoken-language processing. Here, we evaluate these hypotheses in an explicit computational model. We trained a deep convolutional neural network of the ventral visual pathway, first to categorize images, and then to recognize written words invariantly with respect to case, font, and size. We show that the model can account for many properties of the VWFA, particularly when a subset of units has biased connectivity to word output units. The network develops a sparse and invariant representation of written words, based on a restricted set of reading-selective units. Their activation mimics several properties of the VWFA, and their lesioning causes a specific reading deficit. The model predicts that, in the literate brain, written words are encoded by a compositional neural code, with neurons tuned either to individual letters and their ordinal position relative to the start or end of the word, or to letter pairs (bigrams).
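
To make that final prediction concrete, here is a toy illustration (my own, not taken from the paper) of such a compositional code: a word activates hypothetical units tuned to a letter at an ordinal position counted from the start or end of the word, plus units tuned to letter pairs (bigrams). The unit naming scheme and the position cutoff are assumptions made for the example.

    def word_units(word: str, max_pos: int = 3) -> set[str]:
        """Hypothetical units activated by a written word."""
        word = word.lower()
        units = set()
        for i, letter in enumerate(word):
            if i < max_pos:                  # letter at a position counted from word start
                units.add(f"{letter}@start+{i}")
            j = len(word) - 1 - i
            if j < max_pos:                  # letter at a position counted from word end
                units.add(f"{letter}@end-{j}")
        for a, b in zip(word, word[1:]):     # ordered letter pairs (bigrams)
            units.add(f"bigram:{a}{b}")
        return units

    # Visually similar words share many units, so the code is sparse yet compositional.
    print(sorted(word_units("read") & word_units("bead")))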

Footnotes

  • Accepted September 10, 2021.
  • Author contributions: TH, AA, LC, and SD designed the research; TH and AA performed the research; TH, AA, and SD analyzed the data; and TH, AA, LC, and SD wrote the article.

  • Editors: MZ, Università degli Studi di Padova; and KP, Haskins Laboratories.

  • The authors declare no competing interests.

  • This article contains supporting information online at https://www.pnas.org/lookup/suppl/doi:10.1073/pnas.2104779118/-/DCSupplemental.
