LSA.108 | Explaining Phonological Universals
Janet Pierrehumbert and Paul Smolensky
course web site: http://lsa.dlp.mit.edu/Class/108
Despite differences in their detailed phonetic patterns and in the specifics of their phonological grammars, languages display striking abstract similarities. How are these universals to be explained? We will examine explanations based on statistical learning theory, computational studies of dynamical systems including neural networks, biological studies of comparative cognition and genetics, and results on the acoustics, anatomy, perception, and motor control of speech. The course will be based on case studies in the areas of categorization, markedness, sequential organization, and word formation. Example phenomena relevant to universals and putative explanations include:
* Phonological segments: Segments arise as stable phonetic distributions in a dynamic articulation/perception loop. Soft implicational universals are derivable by a stability analysis.
* Learnability (and Richness of the Base): Learning by statistical induction can advance without negative evidence. Induction requires a priori assumptions by the learner about the hypothesis space.
* Optimization: The dynamical properties of neural networks and the realization of symbol structures as superimposed patterns of distributed neural activity mean that the grammar takes the form of an optimization.
* Non-cumulative and cumulative effects: Neural networks capable of fast, accurate sequential coding display the strict domination interactions we observe in phonological parsing. Distributed lexical representations yield gradient effects of similarity and frequency, including gradient phonotactics.
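The articulation/perception loop mentioned in the first bullet can be sketched as a toy simulation. Everything below (the one-dimensional phonetic space, the noise level, the update rule, the batch size) is an illustrative assumption, not the course's actual model: a speaker produces noisy tokens around a category target, tokens that land perceptually closer to a competing category are misheard and discarded, and the target drifts toward the surviving tokens, settling at a stable distance from its competitor.

```python
import random
import statistics

def simulate(target, competitor, noise=0.1, steps=500, batch=50, seed=0):
    """Toy production/perception loop over a 1-D phonetic dimension.
    All parameters are hypothetical choices for illustration."""
    rng = random.Random(seed)
    for _ in range(steps):
        # noisy articulation around the current category target
        tokens = [target + rng.gauss(0.0, noise) for _ in range(batch)]
        # perceptual filtering: tokens nearer the competing category
        # are misperceived as that category and dropped
        heard = [t for t in tokens if abs(t - target) < abs(t - competitor)]
        if heard:
            # the category target updates toward what was successfully heard
            target += 0.1 * (statistics.mean(heard) - target)
    return target

# A category starting near a competitor is pushed away until the
# overlap (and hence the pressure) becomes negligible:
print(simulate(0.8, 1.0))
```

Because the filtering bias shrinks as the two categories separate, the drift is self-limiting: the category stabilizes a few noise standard deviations away from the boundary rather than fleeing indefinitely, which is the kind of stable distribution the bullet describes.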
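The strict domination referred to in the last bullet can be illustrated with a minimal constraint-evaluation sketch in the style of Optimality Theory. The constraints and candidates below are toy examples (a hypothetical input /pat/ with NoCoda and Max-IO), not the course's actual analyses:

```python
def optimal(candidates, ranking):
    """Return the candidate that best satisfies the ranked constraints.
    Under strict domination, one extra violation of a higher-ranked
    constraint can never be offset by any number of improvements on
    lower-ranked constraints."""
    survivors = list(candidates)
    for constraint in ranking:                      # highest-ranked first
        fewest = min(constraint(c) for c in survivors)
        survivors = [c for c in survivors if constraint(c) == fewest]
        if len(survivors) == 1:
            break
    return survivors[0]

# Toy constraints, evaluated on candidate outputs for input /pat/:
def no_coda(form):   # one violation per consonant-final syllable
    return sum(1 for syll in form.split(".") if syll[-1] not in "aeiou")

def max_io(form):    # one violation per segment deleted from /pat/
    return 3 - len(form.replace(".", ""))

# Ranking NoCoda >> Max-IO deletes the coda; the reverse ranking keeps it.
print(optimal(["pat", "pa"], [no_coda, max_io]))    # -> pa
print(optimal(["pat", "pa"], [max_io, no_coda]))    # -> pat
```

Reranking the same constraints flips the winner, which is how a single evaluation mechanism of this kind yields cross-linguistic variation while the constraint set itself stays universal.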
The course will be based on technical articles, not programmatic position papers.
Prerequisites: Students enrolling for credit will be expected to have a background including graduate-level introductions to phonology and phonetics; knowledge of calculus and statistics will also be extremely helpful.