Why Are There Developmental Stages in Language Learning? A Developmental Robotics Model of Language Development

Former PAICS researcher Tony Morse has just published, with Angelo Cangelosi, the lead article in the upcoming issue of Cognitive Science.


Most theories of learning would predict a gradual acquisition and refinement of skills as learning progresses, and while some highlight exponential growth, this fails to explain why natural cognitive development typically progresses in stages. Models that do span multiple developmental stages typically have parameters to “switch” between stages. We argue that by taking an embodied view, the interaction between learning mechanisms, the resulting behavior of the agent, and the opportunities for learning that the environment provides can account for the stage-wise development of cognitive abilities. We summarize work relevant to this hypothesis and suggest two simple mechanisms that account for some developmental transitions: neural readiness focuses on changes in the neural substrate resulting from ongoing learning, and perceptual readiness focuses on the perceptual requirements for learning new tasks. Previous work has demonstrated these mechanisms in replications of a wide variety of infant language experiments, spanning multiple developmental stages. Here we piece this work together as a single model of ongoing learning with no parameter changes at all. The model, an instance of the Epigenetic Robotics Architecture (Morse et al., 2010) embodied on the iCub humanoid robot, exhibits ongoing multi-stage development while learning pre-linguistic and then basic language skills.

You can find the article available for early view here: http://onlinelibrary.wiley.com/doi/10.1111/cogs.12390/abstract?campaign=wolearlyview

Present or former PAICS members who would like to feature their recent research on this site should email me with the details.


Multi-sensory integration without consciousness

This morning, Tad Zawidzki drew my attention to the publication on Tuesday of this paper: Multisensory Integration in Complete Unawareness. What Faivre et al. report there is exactly the kind of phenomenon that Ryan Scott, Jason Samaha, Zoltan Dienes and I have been investigating. In fact, we have been aware of Faivre et al.’s study and cite it in our paper (currently under review).

Their work is good, but ours goes further. Specifically, we show that:

  • a) Cross-modal associations can be learned when neither of the stimuli in the two modalities is consciously perceived (whereas the Faivre et al. study relies on previously learned associations between consciously perceived stimuli).
  • b) Such learning can occur with non-linguistic stimuli.

Together, a) and b) substantially strengthen the case against accounts that assert that consciousness is required for multi-sensory integration (e.g., Global Workspace Theory). Some defenders of such theories might try to brush aside results like those of Faivre et al. by revising their theories to say that consciousness is required only for higher-level cognition, such as learning; and/or by setting aside linguistic stimuli as a special case of (consciously) pre-learned cross-modal associations, which unconscious processes can exploit to achieve the appearance of multi-sensory integration. Our results block both of these attempts to save (what we refer to as) integration theories.