New Findings on Neural Relations between Music and Language Processing, Aniruddh Patel (Tufts University)
To what degree do music and language share processing resources in the brain? This question bears on practical issues (such as the use of musical activities to improve linguistic processing in children with developmental language problems) and on long-standing debates about the mind, e.g., over the evolutionary foundations of human musicality. I have argued that music and language engage domain-specific representations (such as words vs. chords) yet can share neurocognitive mechanisms that operate on these representations when processing musical and linguistic sequences (Patel, 2012). Ray Jackendoff’s insightful writings on music and language (e.g., Jackendoff, in press) emphasize the idea of domain-specific representations in the two domains, and suggest that any shared cognitive mechanisms are likely distributed broadly across several cognitive domains rather than shared specifically by language and music. I will discuss this idea in light of new findings revealing similar neural (event-related potential) responses associated with predictive processing of coherent melodies and sentences, i.e., sequences containing no anomalous notes or words.
Patel, A.D. (2012). Language, music, and the brain: A resource-sharing framework. In: P. Rebuschat et al. (Eds.), Language and Music as Cognitive Systems (pp. 204–223). Oxford: Oxford University Press.
Jackendoff, R. (in press). Perspectives from linguistics: Mental representations for language and music. In: D. Sammler (Ed.), The Oxford Handbook of Language and Music.