Theme: "An associative sensorimotor model of multisensory development."
Abstract: The human foetus and infant develop in a startlingly complex sensory world, in which the central nervous system is inundated with information from touch, taste, smell, proprioception, vestibular input, audition, and vision, each of which conveys information about objects and events in widely varying neural codes. Nonetheless, a range of
multisensory perceptual abilities (e.g., audiovisual synchrony detection) are evident even in the first days of postnatal life. Such early multisensory competencies have led a number of researchers to argue that multisensory integration is available without significant sensory experience, particularly through the perception of putative "amodal" (or redundantly specified) properties of multisensory stimulation (e.g., Bahrick & Lickliter, 2012). However, I will argue that this perspective underestimates the task facing the developing infant: that of working out which stimuli in separate modalities belong together in a coherent representation of the world (the "crossmodal binding problem"). Highlighting a number of findings from brain and behaviour across infants, children, and healthy adults, I will put forward an alternative model of multisensory development (an "associative sensorimotor model"), in which infants and children come to solve the crossmodal binding problem through the association of initially separate representations of unimodal stimuli, in the context of their developing bodies and schemas of action. This account, I will argue, not only better explains the data currently available on multisensory abilities in young infants and children, but is also more biologically plausible than existing accounts.