Jeremy Steffman
Northwestern Linguistics
✉
✎
Northwestern University
Dept. of Linguistics
2016 Sheridan Road
Evanston, IL 60208
☞
CV
I'm a postdoctoral fellow at Northwestern University, where I work in
the Prosody and Speech Dynamics Lab. Prior to this, I received my PhD from UCLA.
My research program focuses on how prosodic/intonational structure and its phonetic correlates influence speech processing. My dissertation, Prosodic prominence in vowel perception and spoken language processing, explores one particular domain where these questions are especially interesting.
I'm more generally interested in phonetics and laboratory phonology, and in particular speech perception, prosody, and psycholinguistics. In another strand of research, I describe phonetic structure and patterns in language, mostly from acoustic data.
Check out the vita tab for a full list of publications and presentations. Scroll down for a list of recent activities and some current projects. Don't hesitate to email me for PDFs of any posters or slides.
Recent and upcoming
- My paper "Prosodic prominence effects in the processing of spectral cues" was recently published in Language, Cognition and Neuroscience. [preprint] [link]
- Hironori Katsuda and I recently published a paper in Language and Speech! We show that perception of contrastive vowel length in Japanese is influenced by prosodic phrasing: listeners integrate information about phrase-final lengthening when perceiving vowel duration, and use intonational cues to do so. Pre-publication version of the manuscript [here] and link to the online article [here].
- My collaborators and I presented the following at LSA 2021 (see the research tab for slides from each presentation):
- Prominence effects in vowel perception: Testing sonority expansion and hyperarticulation
- Prosodic phrasing is integrated in segmental speech perception: Evidence from the Korean Accentual Phrase - with Sahyang Kim, Taehong Cho and Sun-Ah Jun
- The role of segment and pitch accent in Japanese spoken word recognition - with Hironori Katsuda
- Mixed voice in Yemba voiced aspirates - with Matt Faytak
- Effects of aspiration and voicing on vowel acoustics in Yemba - with Jae Weller
Some current projects
- The role of different aspects of phonological and lexical organization in speech perception: disentangling and comparing the influence of biphone probability and neighborhood density in phonetic categorization and in online processing (using eyetracking) - in collaboration with Megha Sundara. See [here] for a manuscript.
- The influence of intonational structure and context on listeners' perception of durational cues, and the processing of pitch accent in word recognition, in Tokyo Japanese - in collaboration with Hironori Katsuda.
- The articulation and acoustics of voiced aspirated sounds in Yemba (Dschang), a language spoken in Cameroon - in collaboration with Matt Faytak and Rolain Tankou. Yemba has segments that are voiced and fully aspirated (not breathy-voiced), for example [ndʰù] 'distant relative' (n.b. NOT [ndʱù]), cf. [ndù] 'river'.
We're using EGG (electroglottography) to explore laryngeal articulation and timing patterns for voicing in these segments. We're also interested in how voice quality for voiced aspirated sounds differs from that of their unaspirated counterparts. To this end, we're analyzing acoustic voice quality measures in tandem with EGG parameters. See [here] for a recent presentation.
I'm also working with Jae Weller to test how both aspiration and voicing in Yemba influence the articulation and acoustics of the following vowel.
- The influence of tonal cues to prosodic phrasing on the perception of segmental contrasts in Seoul Korean, as a function of domain-initial strengthening - in collaboration with Sahyang Kim, Taehong Cho and Sun-Ah Jun.
- Phonetic structure in San Sebastián del Monte Mixtec, including the realization of so-called rearticulated vowels and glottalization - in collaboration with Iara Mantenuto and Félix Cortés.
- The role of pitch and durational cues to focus in the processing of contrastive pitch accents in English - in collaboration with Chie Nakamura and Sun-Ah Jun. Previous work has shown that listeners make predictions about upcoming information based on focus structure. For example, if I tell you "I wore a green hat, my friend wore a BLUE ...", you might expect the blue object to be a hat, because the color term is in contrast. We're interested in what phonetic properties inform this sort of predictive processing; more specifically, how do pitch and duration cues independently shape listeners' predictions, and how do they combine? We're exploring these questions with eyetracking.