Tracking Nonliteral Language Processing Using Audiovisual Scenarios
Document Type
Peer-Reviewed Article
Publication Date
2021
Abstract
Recognizing sarcasm and jocularity during face-to-face communication requires the integration of verbal, paralinguistic, and nonverbal cues, yet most previous research on nonliteral language processing has used written or static stimuli. In the current study, we examined the processing of dynamic literal and nonliteral intentions using eye tracking. Participants (N = 37) viewed short, ecologically valid video vignettes and were asked to identify the speakers’ intentions. Participants had greater difficulty identifying jocular statements as insincere than sarcastic statements, and they spent significantly more time looking at faces during nonliteral versus literal social interactions. Finally, participants took longer to shift their attention from one talker to the other during interactions that conveyed literal positive intentions than during those conveying jocular and literal negative intentions. These findings support the Standard Pragmatic Model and the Parallel-Constraint-Satisfaction Model of nonliteral language processing.
DOI
10.1037/cep0000223
PMID
33793260
Recommended Citation
Rothermich, K., Schoen Simmons, E., Rao Makarla, P., Benson, L., Plyler, E., Kim, H., & Henssel Joergensen, G. (2021). Tracking nonliteral language processing using audiovisual scenarios. Canadian Journal of Experimental Psychology/Revue canadienne de psychologie expérimentale. Advance online publication. https://doi.org/10.1037/cep0000223
Publication
Canadian Journal of Experimental Psychology
Publisher
Canadian Psychological Association
Comments
Published online ahead of print on April 1, 2021.