Tracking Nonliteral Language Processing Using Audiovisual Scenarios

Document Type

Peer-Reviewed Article

Publication Date

2021

Abstract

Recognizing sarcasm and jocularity during face-to-face communication requires the integration of verbal, paralinguistic, and nonverbal cues, yet most previous research on nonliteral language processing has been carried out using written or static stimuli. In the current study, we examined the processing of dynamic literal and nonliteral intentions using eye tracking. Participants (N = 37) viewed short, ecologically valid video vignettes and were asked to identify the speakers' intentions. Participants had greater difficulty identifying jocular statements as insincere than sarcastic statements, and they spent significantly more time looking at faces during nonliteral than literal social interactions. Finally, participants took longer to shift their attention from one talker to the other during interactions that conveyed literal positive intentions compared with jocular and literal negative intentions. These findings support both the Standard Pragmatic Model and the Parallel-Constraint-Satisfaction Model of nonliteral language processing.

Comments

Online ahead of print 1 Apr 2021.

DOI

10.1037/cep0000223

PMID

33793260

Publication

Canadian Journal of Experimental Psychology

Publisher

Canadian Psychological Association
