Probabilistic models have thoroughly reshaped computational linguistics and continue to profoundly change other areas in the scientific study of language, ranging from psycholinguistics to syntax and phonology, and even pragmatics and sociolinguistics.
This change has included (a) qualitative improvements in our ability to analyze complex linguistic datasets and (b) new conceptualizations of language knowledge, acquisition, and use. For the most part, these changes have occurred in parallel, but the same theoretical toolkit underlies both advances. In this lecture I give a concise introduction to this toolkit, covering the fundamentals of contemporary probabilistic models in the study of language, with examples including phoneme identification, perceptual magnet effects, and simple hierarchical models.
This lecture includes content of theoretical interest in its own right, as well as tools and concepts that are fundamental to the other three lectures of the series.