
Unspoken science: exploring the significance of body language in science and academia


Mansi Patil, Vishal Patil, Unisha Katre, Unspoken science: exploring the significance of body language in science and academia, European Heart Journal, Volume 45, Issue 4, 21 January 2024, Pages 250–252, https://doi.org/10.1093/eurheartj/ehad598


Introduction

Science and academia are domains deeply rooted in the pursuit of knowledge and the exchange of ideas, and both rely heavily on effective communication to share findings and foster collaboration. Scientific presentations serve as a platform for researchers to share their work and engage with their peers, yet while the focus is usually on the content of research papers, lectures, and presentations, another form of communication plays a significant role in these fields: body language. Non-verbal cues, such as facial expressions, gestures, posture, and eye contact, convey a wealth of information, often subtly influencing interpersonal dynamics and the perception of scientific work. In this article, we delve into the unspoken science of body language, emphasizing its importance in scientific and academic settings and its impact on presentations, interactions, interviews, and collaborations. We also explore cultural considerations and their implications for cross-cultural communication. By understanding the unspoken science of body language, researchers and academics can enhance their communication skills and promote a more inclusive and productive scientific community.

The power of non-verbal communication

Communication is a multi-faceted process, and words are only one aspect of it. Research suggests that non-verbal communication constitutes a substantial portion of human interaction, often conveying information that words alone cannot. Body language has a direct impact on how people perceive and interpret scientific ideas and findings.1 For example, a presenter who maintains confident eye contact, uses purposeful gestures, and exhibits an open posture is likely to be seen as more credible and persuasive than someone who fidgets, avoids eye contact, and displays closed-off body language (Figure 1).

Figure 1. Types of non-verbal communication.2 Non-verbal communication comprises haptics, gestures, proxemics, facial expressions, paralinguistics, body language, appearance, eye contact, and artefacts.

In academic settings

In academia, body language plays a crucial role in various contexts. During lectures, professors who use engaging body language, such as animated gestures and expressive facial cues, can captivate their students and enhance the learning experience. Similarly, students who exhibit attentive and respectful body language, such as maintaining eye contact and nodding, signal their interest and engagement in the subject matter.3

Body language also influences interactions between colleagues and supervisors. For instance, in a laboratory setting, researchers who display confident and open body language are more likely to be perceived as competent and reliable by their peers. Conversely, individuals who exhibit closed-off or defensive body language may inadvertently create an environment that inhibits collaboration and knowledge sharing. The impact of haptics in research collaboration and networking lies in its potential to enhance interpersonal connections and convey emotions, thereby fostering a deeper sense of empathy and engagement among participants.

The role of body language in interviews and evaluations

Interviews and evaluations are critical moments in academic and scientific careers, and body language can significantly affect their outcomes. Candidates who display confident body language, including good posture, firm handshakes, and appropriate gestures, are more likely to make positive impressions on interviewers or evaluators. Conversely, individuals who exhibit nervousness or closed-off body language may unwittingly convey a lack of confidence or competence, even if their qualifications are strong. Recognizing the power of body language in these situations allows individuals to present themselves more effectively and positively.

Non-verbal cues play a pivotal role during interviews and conferences, where researchers and academics showcase their work. When attending conferences or presenting research, scientists must be aware of their body language to effectively convey their expertise and credibility. Confident body language can inspire confidence in others, making it easier to establish professional connections, garner support for research projects, and secure collaborations.

Similarly, during job interviews, body language can significantly affect the outcome. The facial non-verbal cues of an interviewee can strongly influence their chances of being hired. The face as a whole, the eyes, and the mouth are the features interviewers observe as they form judgements about a candidate's likely effectiveness at work. Applicants who smile genuinely, and whose eyes convey the same non-verbal message as their mouth, are more likely to be hired than those who do not. Research has shown that a first impression can form within milliseconds; it is therefore crucial for an applicant to pass that first test, as it paves the way for the rest of the interview process.4

Cultural considerations

While body language is a universal form of communication, its interpretation can vary across cultures. Different cultures have distinct norms and expectations regarding body language, and what may be seen as confident in one culture may be interpreted differently in another.5 Awareness of these cultural nuances is crucial for fostering effective cross-cultural communication and understanding. Scientists and academics engaged in international collaborations or interactions should familiarize themselves with cultural differences to avoid misunderstandings and promote respectful and inclusive communication.

The impact of body language on collaboration

Collaboration lies at the heart of scientific progress and academic success. Body language plays a significant role in building trust and establishing effective collaboration among researchers and academics. Open and inviting body language, along with active listening skills, can foster an environment where ideas are freely exchanged, leading to innovative breakthroughs. In research collaboration and networking, proxemics can significantly affect the level of trust and rapport between researchers. Respecting each other's personal space and maintaining appropriate distances during interactions can foster a more positive and productive working relationship, leading to better communication and idea exchange (Figure 2). Furthermore, awareness of cultural variations in proxemics can help researchers navigate diverse networking contexts, promoting cross-cultural understanding and enabling more fruitful international collaborations.

Figure 2. Overcoming the barriers of communication. The following factors are important for overcoming barriers in communication: using culturally appropriate language, being observant, assuming positive intentions, avoiding being judgemental, identifying and controlling bias, slowing down responses, emphasizing relationships, seeking help from interpreters, being eager to learn and adapt, and being empathetic.

On the other hand, negative body language, such as crossed arms, lack of eye contact, or dismissive gestures, can signal disinterest or disagreement, hindering collaboration and stifling the flow of ideas. Recognizing and addressing such non-verbal cues can help create a more inclusive and productive scientific community.

Effective communication is paramount in science and academia, where the exchange of ideas and knowledge fuels progress. While much attention is given to verbal communication, the significance of non-verbal cues, specifically body language, cannot be overlooked, and it is crucial not to send conflicting verbal and non-verbal signals. Body language encompasses facial expressions, gestures, posture, eye contact, and other non-verbal behaviours that convey information beyond words.

Disclosure of Interest

The authors declare no conflicts of interest.

References

Baugh AD, Vanderbilt AA, Baugh RF. Communication training is inadequate: the role of deception, non-verbal communication, and cultural proficiency. Med Educ Online 2020;25:1820228. https://doi.org/10.1080/10872981.2020.1820228

Aralia. 8 Nonverbal Tips for Public Speaking. Aralia Education Technology. https://www.aralia.com/helpful-information/nonverbal-tips-public-speaking/ (22 July 2023, date last accessed)

Danesi M. Nonverbal communication. In: Understanding Nonverbal Communication. Bloomsbury Academic, 2022:121–162. https://doi.org/10.5040/9781350152670.ch-001

Cortez R, Marshall D, Yang C, Luong L. First impressions, cultural assimilation, and hireability in job interviews: examining body language and facial expressions' impact on employer's perceptions of applicants. Concordia J Commun Res 2017;4. https://doi.org/10.54416/dgjn3336

Pozzer-Ardenghi L. Nonverbal aspects of communication and interaction and their role in teaching and learning science. In: The World of Science Education. Netherlands: Brill, 2009:259–271. https://doi.org/10.1163/9789087907471_019



Computer Science > Computer Vision and Pattern Recognition

A Survey on Deep Multi-modal Learning for Body Language Recognition and Generation

Abstract: Body language (BL) refers to the non-verbal communication expressed through physical movements, gestures, facial expressions, and postures. It is a form of communication that conveys information, emotions, attitudes, and intentions without the use of spoken or written words. It plays a crucial role in interpersonal interactions and can complement or even override verbal communication. Deep multi-modal learning techniques have shown promise in understanding and analyzing these diverse aspects of BL. The survey emphasizes their applications to BL generation and recognition. Several common BLs are considered, i.e., Sign Language (SL), Cued Speech (CS), Co-speech (CoS), and Talking Head (TH), and we have conducted an analysis and established the connections among these four BLs for the first time. Their generation and recognition often involve multi-modal approaches. Benchmark datasets for BL research are collected and organized, along with the evaluation of SOTA methods on these datasets. The survey highlights challenges such as limited labeled data, multi-modal learning, and the need for domain adaptation to generalize models to unseen speakers or languages. Future research directions are presented, including exploring self-supervised learning techniques, integrating contextual information from other modalities, and exploiting large-scale pre-trained multi-modal models. In summary, this survey paper provides a comprehensive understanding of deep multi-modal learning for various BL generations and recognitions for the first time. By analyzing advancements, challenges, and future directions, it serves as a valuable resource for researchers and practitioners in advancing this field. In addition, we maintain a continuously updated paper list for deep multi-modal learning for BL recognition and generation: this https URL.


Comprehending Body Language and Mimics: An ERP and Neuroimaging Study on Italian Actors and Viewers

Alice Mado Proverbio, Marta Calbi, Mirella Manfredi, Alberto Zani

Affiliations: Department of Psychology, University of Milano-Bicocca, Milan, Italy; Department of Neuroscience, University of Parma, Parma, Italy; Department of Cognitive Science, University of California San Diego, La Jolla, California, United States of America; Institute of Molecular Bioimaging and Physiology (IBFM), National Research Council (CNR), Milan, Italy

Published: March 7, 2014
https://doi.org/10.1371/journal.pone.0091294

In this study, the neural mechanism subserving the ability to understand people’s emotional and mental states by observing their body language (facial expression, body posture and mimics) was investigated in healthy volunteers. ERPs were recorded in 30 Italian University students while they evaluated 280 pictures of highly ecological displays of emotional body language that were acted out by 8 male and female Italian actors. Pictures were briefly flashed and preceded by short verbal descriptions (e.g., “What a bore!”) that were incongruent half of the time (e.g., a picture of a very attentive and concentrated person shown after the previous example verbal description). ERP data and source reconstruction indicated that the first recognition of incongruent body language occurred 300 ms post-stimulus. swLORETA performed on the N400 identified the strongest generators of this effect in the right rectal gyrus (BA11) of the ventromedial orbitofrontal cortex, the bilateral uncus (limbic system) and the cingulate cortex, the cortical areas devoted to face and body processing (STS, FFA, EBA) and the premotor cortex (BA6), which is involved in action understanding. These results indicate that face and body mimics undergo prioritized processing that is mostly represented in the affective brain and is rapidly compared with verbal information. This process is likely able to regulate social interactions by providing on-line information about the sincerity and trustworthiness of others.

Citation: Proverbio AM, Calbi M, Manfredi M, Zani A (2014) Comprehending Body Language and Mimics: An ERP and Neuroimaging Study on Italian Actors and Viewers. PLoS ONE 9(3): e91294. https://doi.org/10.1371/journal.pone.0091294

Editor: Alessio Avenanti, University of Bologna, Italy

Received: October 11, 2013; Accepted: February 11, 2014; Published: March 7, 2014

Copyright: © 2014 Proverbio et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Funding: The authors gratefully acknowledge financial support from University of Milano-Bicocca (2011FAR funds). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Social interactions are based on the transmission of both verbal and non-verbal information, which are automatically processed in parallel. Evidence has been provided that suggests that we are more impressed by the implicit (non-verbal) than the explicit information we receive. Indeed, in contrast to people’s verbal statements, people’s intentions and beliefs can be inferred from how they move their bodies or modulate their facial mimicry [1] – [2] .

Indeed, kinematic studies have identified the cues on which observers rely to detect social intentions (e.g., [3]). Notably, the faster we can judge others’ intentions, the more time we have to select a suitable response [4].

It is well understood that non-verbal behavior and “emotional body language” (EBL) play crucial roles in communication and in guiding social interactions [5]; however, not much is known about the neural underpinnings of this complex ability, especially compared with the large number of neuroscientific investigations that have been carried out on explicit linguistic communication.

It is known that visual processing of the human body and its emotional displays (that are based on motion and mimicry) activates brain regions that are normally involved in the processing of face and body structural properties [6] – [10], such as the fusiform face area (FFA) [11], the extra-striate body area (EBA) [12], which is located at the posterior inferior temporal sulcus/middle temporal gyrus, and the fusiform body area (FBA) [13], which is found ventrally in the fusiform gyrus; all of these areas normally operate in concert with the amygdala and the superior temporal sulcus (STS).

Peelen and coworkers [14] measured the degrees of activation of the EBA and FBA in response to “emotional” and neutral body language. The authors presented short movie clips of people expressing 5 basic emotions (anger, disgust, fear, happiness, and sadness) or performing emotionally neutral gestures. The results showed that the functionally localized EBA and FBA were influenced by the emotional significance of body movements. Furthermore, using multi-voxel pattern analysis, these authors showed that the activities of these two regions were not only greater in response to emotional versus neutral bodies but also that such emotion-related increases correlated positively with the degree of body selectivity across voxels. Similarly, De Gelder and coworkers [15] contrasted brain activations during the perception of frightened or happy EBL. Affective images and images of neutral body movements were alternately displayed, and the faces of the actors were obscured. The results revealed increased BOLD signals in areas responsible for the processing of emotions, including the amygdala and the orbitofrontal cortex (OFC), and in motor areas, such as the premotor cortex.

Amoruso and coworkers [16] recently proposed an integrated functional neuroanatomic model of EBL and action meaning comprehension in which the EBA and FBA provide perceptual information about people and their interactions that is integrated into a larger fronto-insular-temporal network. More specifically, this network includes the following components: several frontal areas that update and associate ongoing contextual information in relation to episodic memory, the superior temporal gyrus (STG), the parahippocampal cortex (PHC), the hippocampus, and the amygdala, which indexes the value of learning target-context associations (affective information). Additionally, in this proposed model, the insular cortex coordinates internal and external milieus with an inner motivational state. An interesting functional magnetic resonance imaging study has provided direct evidence that the EBA is not only highly responsive when subjects observe isolated faces presented in emotional scenes but also highly responsive to threatening scenes in which no body is present [17]; these findings suggest that the role of the EBA in EBL comprehension extends beyond the processing of body structures.

Despite the incredible complexity of the non-facial mimicry and gestures that humans (especially Mediterranean people such as Italians) use to communicate their emotional and mental states, neuroimaging investigations (described above) have thus far dealt solely with basic affective emotions (e.g., anger, happiness, fearfulness, and disgust) and have primarily been based on facial expressions or a limited set of stereotyped symbolic gestures (e.g., indicating “victory” with 2 fingers [18] ) or stick figure characters [19] that are not ecologically relevant.

To address this issue, we created a large set of highly ecological and complex body language patterns by taking pictures of real Italian actors impersonating emotional states in front of a camera according to the Stanislavski method. This method is based not only on psychological analysis of the character, but also on the actor’s personal exploration of the relationship between the character’s interior world and their own. It concerns the expression of interior emotions through their interpretation, enabling actors to bring believable emotions to their performances [20].

All actions and gestures used in this study reflected the actors’ emotional (or physiological) states, rather than a neutral semantic meaning (e.g., “drinking”, “driving”, “smoking”, etc.). They therefore represented people’s emotional body language (EBL). To measure the neural processing associated with EBL comprehension, we compared the neural processing of body language patterns preceded by congruent descriptions of the feeling displayed (e.g., “Come here, let me hug you!” followed by a picture of a person with a big smile and open arms) with that of patterns preceded by incongruent descriptions (e.g., “I hate you!”). We hypothesized that presenting a verbal description of an emotional or physiological state would activate the conceptual representation of the corresponding body language (because of resonating empathic systems), and that the subsequent presentation of a picture of a person actually experiencing the same or a totally different feeling would elicit a congruent (“same”) vs. incongruent (“different”) neural response. The electrical neuroimaging literature has identified such a response as a negative deflection peaking at about 400 ms (but generally more anterior than the linguistic N400) that indexes the automatic detection of an incongruence between incoming visual information about an action being performed and previous knowledge (about the action’s goal, intention, appropriateness, procedure, context of use, etc.): [16], [21] – [25].

In this study, ERPs were recorded in response to nearly 300 pictures of male and female actors displaying clearly recognizable EBL (as previously validated by a group of judges) in the 2 conditions. Pictures were carefully matched across categories for perceptual and sensory characteristics (such as size, luminance, color, body characteristics, body position, body orientation, clothes, body region involved in the mimicry, etc.). We therefore assumed that any differences in the ERP response amplitudes (especially the N400) at any site or latency could be interpreted as bioelectric indexes of the neural activity linked to the detection of discrepancies between the prior verbal description of an affect and the affect expressed by the perceived body language. Source reconstruction was applied to the surface potentials to identify the neural generators responsive to incongruence; thus, spatial resolution was added to the optimal millisecond resolution provided by this electrophysiological technique.

Participants

Thirty healthy right-handed Italian University students (15 males and 15 females) were recruited for this experiment. Their ages ranged from 18 to 29 years (overall mean = 23 years; men: mean = 24.27, SD = 2.37; women: mean = 21.73, SD = 2.43). All had normal or corrected-to-normal vision and reported no history of neurological illness or drug abuse. Handedness was assessed with the Italian version of the Edinburgh Handedness Inventory, a laterality preference questionnaire, which indicated strong right-handedness and right ocular dominance in all participants. Data from all participants were included in all analyses. Experiments were conducted with the understanding and written consent of each participant according to the Declaration of Helsinki (BMJ 1991; 302:1194), with approval from the Ethical Committee of the Italian National Research Council (CNR) and in compliance with APA ethical standards for the treatment of human volunteers (1992, American Psychological Association).

Stimuli and Materials

Stimulus validation.

Stimulus materials were generated by taking ecological pictures of emotional body postures. Eight semi-professional actors (4 males and 4 females) were asked to display particular moods or emotional states using their entire body. The individual in this manuscript has given written informed consent (as outlined in PLOS consent form) to publish these case details. Photographs were taken in a classroom while the actors stood in front of the camera in a black hall in light-controlled conditions. A set of standardized instructions was given to each actor indicating that they should spontaneously express 40 typical emotional/mental states (listed in Table 1 ). The expressions of these emotional/mental states did not include symbolic or language-based gestures. For each of the 40 body-language categories, 8 pictures were taken, which resulted in a total of 320 pictures. Half of these pictures were assigned to the congruent condition, and the other half were assigned to the incongruent condition. In the congruent condition, the pictures were congruent with verbal descriptions that summarized the body language and immediately preceded the display of the pictures; in the incongruent condition, the pictures were incongruent with the verbal descriptions that immediately preceded them. Example verbal descriptions are provided in Table 1 . The complexity of verbal description and emotional connotation of body-language categories was balanced across the congruent and incongruent classes, as shown in Table 2 .

Table 1. https://doi.org/10.1371/journal.pone.0091294.t001

Table 2. https://doi.org/10.1371/journal.pone.0091294.t002

To test the validity of the pictures (i.e., to ensure that they were easily comprehensible in terms of their intended meanings), they were presented to a group of 12 judges (8 women, 4 men) with a mean age of 29.9 years. These judges were asked to judge the coherence between the EBL of the pictures and the verbal labels associated with them. Specifically, the judges were asked, “How likely is it that the person pictured would actually think or say something like that?” The judges responded by pressing a button to signal “Yes, it’s likely” (congruent) or another button to signal “No, it’s not likely” (incongruent).

All pictures were randomly ordered one per page in a PowerPoint file with their associated verbal descriptions and presented to the 12 judges. The experimenter showed the judges the pictures one by one for a few seconds each and asked them to rapidly evaluate the congruency as described above. Only pictures that were evaluated consistently by at least 75% of the judges were included in the experimental set; the other pictures were rejected or the corresponding verbal descriptions were changed.
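As a concrete illustration of this 75% agreement criterion, here is a minimal sketch in Python (the data layout and pandas usage are our own illustration, not the authors' actual tooling):

```python
import pandas as pd

# Hypothetical layout: one row per (picture, judge) pair recording whether
# the judge's congruent/incongruent evaluation matched the intended label.
ratings = pd.DataFrame({
    "picture": [1, 1, 1, 1, 2, 2, 2, 2],
    "judge": ["a", "b", "c", "d", "a", "b", "c", "d"],
    "consistent": [True, True, True, False, True, False, False, True],
})

# Proportion of judges who evaluated each picture consistently.
agreement = ratings.groupby("picture")["consistent"].mean()

# Keep only pictures evaluated consistently by at least 75% of the judges;
# the rest would be rejected or have their verbal descriptions changed.
kept_pictures = agreement[agreement >= 0.75].index.tolist()
print(kept_pictures)  # -> [1]
```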

Final stimuli for ERP experiment.

At the end of this process, we selected 280 pictures (half congruent and half incongruent). Figures 1 and 2 show example stimuli for the various emotional states. The stimuli subtended visual angles of 6° horizontally and 8° vertically. The stimuli were equiluminant: an ANOVA revealed no difference in picture luminance across the categories (congruent = 9.33 cd/cm2; incongruent = 8.93 cd/cm2). The verbal descriptions were presented in Arial Narrow font and were written in white on a black background. The lengths of these descriptions ranged from 3 to 11 cm, subtending visual angles of 1° 30′ to 5° 30′ on the horizontal axis; their heights ranged from 1 to 4 cm, subtending visual angles of 30′ to 2° on the vertical axis. Each verbal description was presented in short lines (1 to 3 words per line) for 700 ms at the center of the PC screen, with inter-stimulus intervals (ISIs) that ranged from 100 to 200 ms, and was followed by the corresponding picture, which was presented for 1200 ms with an ISI of 1500 ms. The outer background was black.
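The relation between physical size s, viewing distance d, and visual angle θ is θ = 2·arctan(s/2d). A small sketch (ours, not the authors') using the 114 cm viewing distance reported in the procedure below:

```python
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Visual angle (degrees) subtended by a stimulus of a given size."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

def size_for_angle_cm(angle_deg: float, distance_cm: float) -> float:
    """Physical size (cm) needed to subtend a given visual angle."""
    return 2 * distance_cm * math.tan(math.radians(angle_deg) / 2)

# At the 114 cm viewing distance used here, the 6 deg x 8 deg stimuli
# correspond to roughly 11.9 cm x 15.9 cm on the screen.
print(round(size_for_angle_cm(6, 114), 1), round(size_for_angle_cm(8, 114), 1))
```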

Figure 1. https://doi.org/10.1371/journal.pone.0091294.g001

Figure 2. https://doi.org/10.1371/journal.pone.0091294.g002

The task consisted of responding as accurately and quickly as possible to the pictures judged to be congruent by pressing a response key with the index finger (of the left or right hand) and to the pictures judged to be incongruent by pressing a response key with the middle finger (of the left or right hand). The hand used was alternated during the recording session (to avoid possible biases due to the prolonged activation of the contralateral hemisphere). Hand orders and the task conditions were counterbalanced across subjects. At the beginning of each session, the subjects were told which hand would be used to indicate their responses.

The participants were seated comfortably in a darkened, acoustically and electrically shielded test area. They faced a high-resolution VGA computer screen located 114 cm from their eyes and were instructed to gaze at the center of the screen, where a small blue circle served as a fixation point, and to avoid any eye or body movements during the recording session. Stimuli were presented in a random order at the center of the screen in 8 blocks of 33–38 trials that lasted about 3 minutes each. Each block was preceded by a warning signal (a red cross) that was presented for 700 ms. The experimental session was preceded by a training session that included two runs, one for each hand. The sequence presentation order varied across subjects. The experiment lasted about 1 hour and a half (pauses included).

EEG Recording and Analysis

The EEG was continuously recorded from 128 scalp sites at a sampling rate of 512 Hz. Horizontal and vertical eye movements were also recorded. Linked mastoids served as the reference lead. The EEG and electro-oculogram (EOG) were amplified with a half-amplitude band pass of 0.016–70 Hz. Electrode impedance was maintained below 5 kΩ. The EEG was recorded and analyzed using EEProbe recording software (ANT Software, Enschede, The Netherlands). Stimulus presentation and triggering were performed using Eevoke Software for audiovisual presentation (ANT Software, Enschede, The Netherlands).

EEG epochs were synchronized with the onset of stimulus presentation. A computerized artifact rejection criterion was applied before averaging to discard epochs in which eye movements, blinks, excessive muscle potentials or amplifier blocking occurred. The artifact rejection criterion was a peak-to-peak amplitude exceeding 50 µV, and the rejection rate was ∼5%. ERPs were averaged off-line from −100 ms before to 1200 ms after stimulus onset. ERP components were identified and measured with reference to the average baseline voltage over the interval of −100 to 0 ms at the sites and latencies at which they reached their maximum amplitudes. The choice of electrode sites and time windows for measuring and quantifying the ERP components of interest was based both on previous literature and on the determination of when and where (on the scalp surface) they reached their maximum values.
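The authors used EEProbe for recording and analysis; as an open-source analogue, the same epoching and peak-to-peak rejection logic can be sketched with MNE-Python (file name, event codes, and the presence of a stim channel are assumptions for illustration):

```python
import mne

# Hypothetical file and event coding; the original pipeline used EEProbe.
raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)
events = mne.find_events(raw)  # assumes a stim channel marking picture onsets
event_id = {"congruent": 1, "incongruent": 2}

# Epoch from -100 to 1200 ms around stimulus onset, baseline-correct on
# -100..0 ms, and drop epochs whose peak-to-peak EEG amplitude exceeds
# 50 microvolts (the criterion reported above).
epochs = mne.Epochs(
    raw, events, event_id,
    tmin=-0.1, tmax=1.2,
    baseline=(-0.1, 0.0),
    reject=dict(eeg=50e-6),
    preload=True,
)

# Average the surviving epochs per condition to obtain the ERPs.
evokeds = {name: epochs[name].average() for name in event_id}
```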

The mean amplitude (at peak) and latency of the posterior P300 response were measured at centroparietal (CP1, CP2) and occipitotemporal (P9, P10, PPO1, PPO2) sites between 280 and 440 ms. The anterior N400 mean area amplitude was quantified at dorsolateral (F1, F2) and inferior (F5, F6) frontal sites in the 380–460 ms time window. The mean area amplitude of the centro-parietal N400 response was measured at the P1, P2, CPP1h, and CPP2h sites between 400 and 600 ms. The amplitude of the late positivity (LP) was measured over the occipitotemporal P9, P10, PPO1, and PPO2 sites in the 650–850 ms time window.
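Continuing the hypothetical MNE-Python sketch above, window measures like these can be extracted as channel-and-time averages (channel names must match the recording montage):

```python
def mean_window_amplitude(evoked, channels, tmin, tmax):
    """Mean amplitude (microvolts) over a channel set and time window."""
    picked = evoked.copy().pick(channels).crop(tmin, tmax)
    return picked.data.mean() * 1e6  # MNE stores EEG in volts

# Anterior N400: dorsolateral (F1, F2) and inferior (F5, F6) frontal
# sites in the 380-460 ms window, per condition.
for name, evoked in evokeds.items():
    amp = mean_window_amplitude(evoked, ["F1", "F2", "F5", "F6"], 0.380, 0.460)
    print(f"{name}: anterior N400 mean amplitude = {amp:.2f} uV")
```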

ERP data were subjected to multifactorial repeated-measures ANOVAs with three within-group factors: Condition (congruent, incongruent), Electrode (dependent upon the ERP component of interest) and Hemisphere (left, right). Multiple comparisons of means were performed with Tukey’s post-hoc tests.
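The paper does not state which statistical package was used; an equivalent Condition × Hemisphere repeated-measures ANOVA could be run, for instance, with the pingouin package on a long-format table of per-subject amplitudes (synthetic data below, purely for illustration):

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Synthetic long-format data standing in for the real measurements:
# one amplitude per subject x condition x hemisphere cell.
rng = np.random.default_rng(0)
rows = [
    {"subject": s, "condition": c, "hemisphere": h,
     "amplitude": rng.normal(loc=(2.0 if c == "congruent" else 1.0), scale=1.0)}
    for s in range(30)
    for c in ("congruent", "incongruent")
    for h in ("left", "right")
]
df = pd.DataFrame(rows)

# Two-way repeated-measures ANOVA (Condition x Hemisphere).
aov = pg.rm_anova(data=df, dv="amplitude",
                  within=["condition", "hemisphere"], subject="subject")
print(aov)

# Follow-up pairwise comparisons (pingouin does not implement Tukey's
# post-hoc tests for repeated measures, so this is only an analogue).
posthoc = pg.pairwise_tests(data=df, dv="amplitude",
                            within=["condition", "hemisphere"],
                            subject="subject")
print(posthoc)
```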

Topographical voltage maps of the ERPs were made by plotting color-coded isopotentials obtained by interpolating voltage values between scalp electrodes at specific latencies. Low-resolution electromagnetic tomography (LORETA; Pascual-Marqui and coworkers [26]) was performed on the ERP waveforms from the anterior N400 (380–460 ms) using ASA4 Software (ANT Software, Enschede, The Netherlands).

Source reconstruction was performed on the surface potentials recorded in the latency range of the anterior N400, because it represented the first ERP modulation related to action content and because previous literature has shown modulation of the anterior N400 indexing the detection/discrimination of incongruent vs. congruent actions [18], [21], [22], [27], [28]. LORETA is a discrete linear solution to the inverse EEG problem, and it corresponds to the 3D distribution of neuronal electric activity that maximizes similarity (i.e., maximizes synchronization) in terms of orientation and strength between neighboring neuronal populations (represented by adjacent voxels). In this study, an improved version of standardized weighted low-resolution brain electromagnetic tomography (sLORETA) was used; this version incorporates a singular value decomposition-based lead field weighting (i.e., swLORETA; Palmero-Soler and coworkers [29]). The source space properties included a grid spacing (the distance between two calculation points) of 5 points and an estimated signal-to-noise ratio, which defines the regularization, of 3 (higher values indicating less regularization and therefore less blurred results). SwLORETA was performed on the group data and identified statistically significant electromagnetic dipoles (p<0.05); increases in the magnitudes of these dipoles correlated with more significant activation. The strength of a locus of activation is represented by the magnitude (magn.) of the electromagnetic signal (in nAm). The electromagnetic dipoles are shown as arrows and indicate the position, orientation and magnitude of dipole modeling solutions applied to the ERP waveform in the specific time window. The larger the magnitude, the more significantly a source was found to explain/contribute to the surface potential.

A realistic boundary element model (BEM) was derived from a T1-weighted 3D MRI data set by segmenting the brain tissue. This BEM model consisted of one homogenous compartment comprised of 3,446 vertices and 6,888 triangles. The head model was used for intracranial localization of surface potentials. Both segmentation and generation of the head model were performed using ASA software.
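swLORETA as implemented in ASA is proprietary. For readers who want to experiment with a loosely analogous distributed inverse solution, MNE-Python offers sLORETA on a template anatomy; the sketch below (our illustration, not the authors' pipeline) reuses `epochs` and `evokeds` from the earlier sketch and substitutes the fsaverage template for the subject-specific MRI/BEM described above:

```python
import os
import mne

# Template anatomy and a precomputed BEM in place of the subject MRI.
fs_dir = mne.datasets.fetch_fsaverage(verbose=False)
subjects_dir = os.path.dirname(fs_dir)
src = os.path.join(fs_dir, "bem", "fsaverage-ico-5-src.fif")
bem = os.path.join(fs_dir, "bem", "fsaverage-5120-5120-5120-bem-sol.fif")

# MNE's EEG inverse modeling expects an average-reference projector.
evoked = evokeds["incongruent"].copy().set_eeg_reference("average",
                                                         projection=True)

fwd = mne.make_forward_solution(evoked.info, trans="fsaverage",
                                src=src, bem=bem, eeg=True, meg=False)
noise_cov = mne.compute_covariance(epochs, tmax=0.0)  # pre-stimulus noise
inv = mne.minimum_norm.make_inverse_operator(evoked.info, fwd, noise_cov)

# sLORETA source estimate, cropped to the anterior N400 window (380-460 ms).
stc = mne.minimum_norm.apply_inverse(evoked, inv, method="sLORETA")
stc_n400 = stc.copy().crop(0.380, 0.460)
```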

Reaction times (RTs) that exceeded the mean value ±2 standard deviations were discarded, which resulted in a rejection rate of 2%. Error rate percentages were converted to arcsin values. Both RTs and error percentages were subjected to separate multifactorial repeated-measures ANOVAs with 1 between-subject factor (gender: male or female) and 2 within-subject factors (condition: congruent or incongruent; and response hand: left or right).
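Both behavioral preprocessing steps are simple to express in code; a brief sketch (the arcsine transform is shown in its conventional arcsin-of-square-root form, which the paper does not spell out):

```python
import numpy as np

def trim_rts(rts_ms):
    """Discard reaction times beyond the mean +/- 2 standard deviations."""
    rts = np.asarray(rts_ms, dtype=float)
    lo, hi = rts.mean() - 2 * rts.std(), rts.mean() + 2 * rts.std()
    return rts[(rts >= lo) & (rts <= hi)]

def arcsine_transform(proportion):
    """Variance-stabilizing arcsine transform of an error proportion."""
    return np.arcsin(np.sqrt(proportion))

rts = [812, 845, 799, 1650, 830, 821]  # illustrative values only
print(trim_rts(rts))              # the 1650 ms outlier is removed
print(arcsine_transform(0.077))   # transformed error rate
```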

Behavioral Results

Analysis of the reaction times (RTs) revealed a main effect of response hand (F(1,28) = 9.1, p<0.0055) that was due to the responses of the right hand (828 ms, SE = 22) being faster than those of the non-dominant hand (851 ms, SE = 21). Neither gender nor stimulus congruence significantly affected RTs. The accuracy data indicated that fewer errors were committed in response to incongruent pictures (7.7%, SE = 1.5; raw value = 2%) than in response to congruent pictures (20.9%, SE = 1.7; raw value = 12%), and the corresponding main effect of congruence was significant (F(1,28) = 41.8, p<0.0055). No other factors significantly affected accuracy.

Electrophysiological Data

Figure 3 shows grand-average ERP waveforms recorded at various anterior and posterior sites as a function of the congruence of the actions and verbal description. A strong posterior modulation of the synchronized response that indicates the early recognition of expected gestures (as early as 280 ms and indexed by the P300 component) is visible. This modulation was followed by a centro/parietal N400 that was elicited by incongruent gestures (400–600 ms) and by a larger late positivity (LP) that was elicited by congruent gestures (650–850 ms). At the frontal sites, incongruent EBL was recognized as such as early as 380 ms (380–460 ms) as indexed by the large inferior frontal N400 response.

Figure 3. https://doi.org/10.1371/journal.pone.0091294.g003

Posterior Sites

P300 (280–440 ms).

The ANOVA performed on the peak amplitude values of the P300 revealed a significant effect of Condition (F(1,29) = 17.563; p<0.00024) that was driven by a stronger response to congruent (4.41 µV, SE = 0.43) than incongruent (3.82 µV, SE = 0.41) EBL. The significant effect of Electrode (F(2,58) = 17.627; p<0.000001) was driven by the presence of a larger P300 at lateral occipital (5.60 µV, SE = 0.59) than centro-parietal (3.22 µV, SE = 0.50) and occipitotemporal (3.51 µV, SE = 0.32) sites. The ANOVA also revealed a significant Condition × Electrode interaction (F(2,58) = 5.096; p<0.00915) that was driven by larger P300 responses to congruent images at posterior sites (5.96 µV, SE = 0.61; post-hoc comparisons: p<0.00052). The significant Condition × Hemisphere interaction (F(1,29) = 6.628; p<0.0155) was driven by larger responses to congruent body expressions over the right (4.50 µV, SE = 0.42) than the left (4.32 µV, SE = 0.46) hemisphere (post-hoc tests: p<0.0123).

Latency analyses indicated that P300 occurred earlier for incongruent (340 ms, SE = 0.005) than congruent stimuli (359 ms, SE = 0.005) as indicated by the significant main effect of Condition (F(1,29) = 18.770; p<0.00017). This result was most likely related to differences in P300 amplitude since large and slow components (such as P300) typically reach their maximum amplitude later in time: the smaller, the earlier.

N400 (400–600 ms).

The ANOVA performed on the mean amplitudes of the posterior N400 revealed a significant effect of Condition (F(1,29) = 33.86; p<0.000004) that was driven by greater responses to incongruent (1.97 µV; SE = 0.33) than congruent EBL (3.71 µV; SE = 0.39). The main effect of Electrode was also significant (F(2,58) = 10.34; p<0.00015) and was driven by larger N400s at occipitotemporal sites than parietal sites. However, the Condition × Electrode interaction (F(2,58) = 22.63; p<0.000001) was driven by the lack of an effect of stimulus congruence over the visual (occipitotemporal) areas and the significantly increased N400 in response to incongruent pictures at central and centroparietal sites. The Hemisphere × Condition (F(1,29) = 4.48; p<0.05) and Hemisphere × Electrode × Condition (F(2,58) = 10.96; p<0.00001) interactions and the relevant post-hoc comparisons revealed a hemispheric asymmetry in the congruence effect; the effect was lacking at the left occipitotemporal sites but was significant over the right visual areas (P9: CONG 1.79 µV, SE = 0.29; INCONG 1.56 µV, SE = 0.28. P10: CONG 2.43 µV, SE = 0.36; INCONG 1.53 µV, SE = 0.34). The modulation of the N400 in response to incongruent stimuli was significant at all sites (except P9) bilaterally, and this effect was larger at parietal sites (also visible in Fig. 3).

Late positivity (650–850 ms).

The LP was significantly affected by Condition (F(1,29) = 5.503; p<0.02604) and was of greater amplitude in response to congruent (3.63 µV, SE = 0.38) than incongruent (3.14 µV, SE = 0.41) EBL. The significant effect of Electrode (F(1,29) = 86.731; p<0.000001) was driven by larger LPs at PPO1 and PPO2 (4.58 µV, SE = 0.47) than at P9 and P10 (2.19 µV, SE = 0.32). This result was confirmed by the significant Condition × Electrode interaction (F(1,29) = 8.410; p<0.00706), which was driven by greater responses at lateral occipital sites (PPO1, PPO2: CONG = 4.95 µV, SE = 0.48; INCONG = 4.21 µV, SE = 0.50) than at occipitotemporal sites (P9, P10: CONG = 2.31 µV, SE = 0.31; INCONG = 2.07 µV, SE = 0.35) (post-hoc comparisons: p<0.00017). The ANOVA also revealed a significant Condition × Hemisphere interaction (F(1,29) = 4.365; p<0.04556) that was driven by larger LP responses to congruent images over the right hemisphere (3.86 µV, SE = 0.46) compared with the left hemisphere (3.40 µV, SE = 0.35) (post-hoc comparison of the means: p<0.00325). The significant Electrode × Hemisphere interaction (F(1,29) = 8.290; p<0.00742) indicated larger LP amplitudes over lateral occipital (PPO1 = 4.69 µV, SE = 0.47; PPO2 = 4.47 µV, SE = 0.49) than occipitotemporal sites (P9 = 1.72 µV, SE = 0.25; P10 = 2.66 µV, SE = 0.47) and greater LP modulation over the right than over the left hemisphere (post-hoc tests: p<0.01233).

The Condition × Electrode × Hemisphere interaction (F(1,29) = 8.158; p<0.00785) was driven by greater LP responses to congruent (P9 = 1.74 µV, SE = 0.27; P10 = 2.89 µV, SE = 0.46; PPO1 = 5.06 µV, SE = 0.48; PPO2 = 4.83 µV, SE = 0.50) than incongruent (P9 = 1.71 µV, SE = 0.27; P10 = 2.42 µV, SE = 0.49; PPO1 = 4.31 µV, SE = 0.50; PPO2 = 4.12 µV, SE = 0.52) images, particularly over the right lateral occipital sites (post-hoc comparisons: p<0.00028).

Anterior Sites

N400 (380–460 ms).

The anterior N400 showed an effect of Condition (F(1,29) = 36.754; p<0.000001) that was driven by greater N400 responses to incongruent (−1.42 µV, SE = 0.49) than congruent images (−0.10 µV, SE = 0.47). The significant effect of Electrode (F(1,29) = 5.719; p<0.0235) was driven by greater N400 amplitudes at inferior frontal (−1.03 µV, SE = 0.43) sites than at frontal (−0.49 µV, SE = 0.52) sites (see also the waveforms in Fig. 3). The ANOVA also revealed a significant effect of Hemisphere (F(1,29) = 31.755; p<0.00001) that was due to the larger N400 response over the left hemisphere (−1.38 µV, SE = 0.49) than over the right hemisphere (−0.14 µV, SE = 0.47). This result was confirmed by the significant Electrode × Hemisphere interaction (F(1,29) = 11.981; p<0.0017), which was due to the larger N400 response over the left (F1 = −0.87 µV, SE = 0.55; F5 = −1.89 µV, SE = 0.47) than over the right (F2 = −0.12 µV, SE = 0.53; F6 = −0.16 µV, SE = 0.44) hemispheric sites (post-hoc tests: p<0.00735). The significant Condition × Electrode interaction (F(1,29) = 13.499; p<0.00097) was due to the greater N400 modulatory effect of action incongruence at inferior frontal sites (INCONG = −1.58 µV, SE = 0.46; CONG = −0.47 µV, SE = 0.43) compared with more dorsolateral frontal sites (INCONG = −1.26 µV, SE = 0.54; CONG = 0.26 µV, SE = 0.53). Both the topographic distribution and the left hemisphere asymmetry are clearly visible in the topographical maps in Fig. 4.

Figure 4. https://doi.org/10.1371/journal.pone.0091294.g004

To locate the possible neural sources of the N400 response, separate swLORETA source reconstructions were performed on the brain voltages recorded in the congruent and incongruent conditions and on the difference waves obtained by subtracting the ERPs elicited by congruent EBL from those elicited by incongruent EBL in the 380–460 ms time window. We assumed that, whereas the processing of congruent EBL reflected activation of the complex circuitry for action understanding, theory of mind, body and face analysis, and body language reading, the processing of incongruent EBL specifically (and additionally) activated the regions most involved in representing the supposed emotional states of others, along with regions representing a discrepancy in conceptual representation.

Table 3 shows the electromagnetic dipoles that significantly explained the surface voltages recorded in response to congruent (Top) and incongruent (Bottom) affective body language. A series of activations were common to the two conditions (clearly visible in Figure 5 ) and included the right (BA20) and left (BA37) fusiform gyri, the right parahippocampal gyrus (BA35), and the right supramarginal gyrus (BA40). The main differences between the congruent and incongruent conditions were the following: the activation of the right STG (BA38) elicited by congruent EBL (12.85 nAm) was stronger than that elicited by incongruent (11.73 nAm) EBL; the left postcentral gyrus of the parietal cortex was uniquely activated by congruent EBL; and the left premotor cortex was uniquely activated by incongruent EBL (BA6). To better appreciate the difference between the 2 conditions (since, naturally, the strongest signals came from face and body processing-devoted brain areas, commonly activated by congruent and incongruent EBL), a further swLORETA was applied to the grand-average difference-wave obtained by subtracting the ERPs elicited by congruent EBL from those elicited by incongruent EBL. Table 4 contains a list of significant sources, and the LORETA solution is visible in Figure 6 . The processing of incongruent body language was associated with significant activities in the bilateral limbic (BA28, 38) and ventromedial orbitofrontal regions (BA11), and regions that are normally activated by human faces and bodies (BA 20, 21, 37).

Figure 5. The inverse solution was applied to the grand average signals (N = 30). The different colors represent differences in the magnitudes of the electromagnetic signals (in nAm). The electromagnetic dipoles are shown as arrows and indicate the position, orientation and magnitude of the dipole modeling solutions that were applied to the ERP waveforms in the specific time windows. The numbers refer to the displayed brain slice in the axial view: L = left, R = right. https://doi.org/10.1371/journal.pone.0091294.g005

Figure 6. A = anterior, P = posterior. https://doi.org/10.1371/journal.pone.0091294.g006

Table 3. https://doi.org/10.1371/journal.pone.0091294.t003

Table 4. https://doi.org/10.1371/journal.pone.0091294.t004

Discussion

The purpose of this study was to investigate the neural mechanisms underlying the human ability to understand emotional body language (EBL). To accomplish this goal, whole-figure photographs of 8 female and male actors portraying 40 typical emotional or mental states (e.g., “I am in love”, “I admire you so much!”, “I hate you” etc.) were taken. During the EEG recording sessions, each of 280 pictures was presented and preceded by a short verbal description of a feeling; this feeling was strongly incongruent with the content of the picture in half of the presentations. Behavioral and ERP data elicited by congruent and incongruent EBL displays were compared. To exclude the possibility that differences emerged due to discrepancies in purely sensory characteristics, all photographs were taken in the same conditions and were equiluminant, identical in size, and similar in many perceptual characteristics (e.g., each actor was present in the same number of congruent and incongruent trials).

Due to this careful balancing of perceptual factors, the electrophysiological signals showed no differences between the 2 classes of stimuli in the first 250 ms of visual processing (i.e., the P1 and N170 components); this lack of difference is clearly illustrated in the ERP waveforms in Figure 3 that were recorded at the occipitotemporal and lateral occipital sites. The lack of effects in the early P1 and N1 components demonstrates that the only difference between the two classes of photographs was their congruence with the preceding verbal definitions.

The earliest recognition of body language was indexed by the centroparietal P300 component, which was larger in response to congruent behavior in the time window between 280 and 440 ms. This congruence effect was more evident over the right visual area (i.e., PPO1 and PPO2), which most likely reflects the recognition activity (or priming effect) of cortical body- and face-devoted areas.

Previous studies of congruent/incongruent actions (e.g., [23] , [24] , [28] ) have not reported posterior P3 and LP responses. This discrepancy is most likely due to methodological differences. In the present study, categorization based on action congruency was required of the participants, which generated P300-like responses to the congruent items; the tasks used in the aforementioned previous studies were implicit and involved secondary tasks that were not based on action categorization. Presumably, in these studies, action incongruence was automatically detected by the action-observation system, which generated anterior N400 responses to incongruent items, and no response-related P3 was generated. Indeed, Shibata and coworkers [27] observed large P300 responses to congruent actions when they asked their participants to evaluate the appropriateness of cooperative actions between two people.

Regarding the present investigation, the earliest increase in ERP amplitude in response to incongruent body language was observed at frontal, particularly inferior frontal, sites (F1, F2) in the time window of 380 to 460 ms and took the form of an N400 deflection. The N400 component typically represents a supramodal index of conceptual processing and reflects difficulty in integrating incoming information with previously acquired information (in this case, verbal descriptions of emotional or mental states).

Previous ERP literature has revealed which neural circuits are involved in the recognition of purposeful versus purposeless behavior; the activities of these circuits are thought to be reflected in the modulation of the anterior N400 response [18], [21] – [23], [27]. For example, Proverbio & Riva [22] provided evidence that incongruent actions (e.g., a surgeon dissecting a book) elicit larger anterior negativities (i.e., N400) than do congruent actions (e.g., a woman doing the laundry), especially at inferior frontal sites (F1, F2). Indeed, the N400 response is not only sensitive to semantic and conceptual linguistic information but is also sensitive to violations of world-knowledge and communicative gestures [30]. Deaf native signers are especially sensitive to semantic violations and produce larger N400 responses than non-deaf controls [31]. Interestingly, Proverbio and coworkers [24] found that perceptions of incorrect basketball scenes elicited enlarged N400 responses at anterior sites in the 450–530 ms time window in skilled brains (i.e., professional basketball players). This deflection was totally absent in people who were unfamiliar with basketball. The modulation of the anterior N400 (which was larger at lateral anterior frontal sites; i.e., AF7 and AF8) was interpreted to reflect difficulty integrating incoming visual information with related sensorimotor knowledge. In this study [24], only professional basketball players detected violations of the system of basketball rules (i.e., violations of body postures, gestures, actions, or positions). A swLORETA inverse solution applied to the difference waves recorded in response to incorrect minus correct actions revealed that the strongest foci of activation were in the right temporal cortex, the inferior and superior temporal gyri (STG, BA38), the right fusiform gyrus and the lingual gyrus (BA18). The lateral occipital area, also called the extrastriate body area (EBA) [32], is part of both the perception and action systems. Additionally, the superior temporal sulcus (STS) contains neurons that respond to the observation of biological actions such as grasping, looking or walking. In addition to visual areas, the perception of incorrect actions stimulated the right inferior parietal lobule (BA39/40), the precentral and premotor cortices (BA6), and the cerebellum of basketball players. The inferior parietal lobule has been shown to code transitive motor acts and meaningful behavioral routines (e.g., brushing teeth or flipping a coin). Indeed, lesions of the inferior parietal lobule are associated with impairments in the ability to recognize or perform skilled actions (such as lighting a cigarette or making coffee), a deficit called apraxia. In both groups, pictures of players in action strongly activated the right fusiform gyrus (BA37), a region that may include both the fusiform face area (FFA) [33] and the fusiform body area [34], which are selectively activated by human faces and bodies, respectively.

Moreover, in the present study, the analysis of the inverse swLORETA solution applied to the brain responses elicited by congruent and incongruent affective body language yielded a series of common activations that included the fusiform and the medial temporal gyri, reflecting the involvement of areas dedicated to the analysis of faces and bodies, such as the FFA, the fusiform body area (FBA) and the EBA. Additionally, we also found common activation of the parahippocampal gyrus, a finding that agrees with a similar result in Proverbio and coworkers’ [24] aforementioned study on basketball players. Indeed, the parahippocampal gyrus might be involved in the visuospatial processing of places and in the analysis of the spatial positions and orientations of body parts with respect to space and the environment [35], [36].

In the present study, the activation of the left superior frontal gyrus was associated with the processing of incongruent body language, whereas the same area was bilaterally activated during the perception of congruent body language. In the congruent EBL condition, activity was also detected in regions of the fronto-parietal system [37], namely the left postcentral gyrus (BA3) and the right supramarginal gyrus (BA40) (the latter is also involved in coding incongruent body language). In contrast, the source in the left precentral gyrus (BA6) was active only in response to incongruent EBL. This region is thought to play a crucial role in representing the goals of actions and the intentions of agents and has also been found to be active in previous studies of action recognition [23], [24], [28].

The swLORETA applied to the difference between congruent and incongruent EBL in the N400 time window revealed significant bilateral activity in the uncus, the anterior portion of the parahippocampal gyrus (BA28, BA38), and the right posterior cingulate cortex (BA23); these regions belong to the limbic circuit involved in emotional processing. These localizations agree with a large body of literature indicating the primary involvement of the prefrontal and orbitofrontal cortices, the hippocampus [38] and the cingulate cortex [23] in emotional processing and in the subjective evaluation of events and their significance [39]–[41]. In a recent study by Proverbio and coworkers [28], in which the processing of social cooperative and purely affective interactions was contrasted, a strong activation of the limbic system, especially the right posterior cingulate cortex, was found in response to purely affective interactions in the time window between 150 and 190 ms (corresponding to the N170 ERP response). Additionally, the involvement of the posterior cingulate cortex (BA23) in the recognition of appropriate (vs. inappropriate) actions has been reported by Proverbio and co-workers [23], especially in women, who displayed a more emotional than rational reaction to action incongruence. Therefore, it seems that the cingulate cortex (along with other cortical regions, including the inferior parietal area) is heavily involved in the mechanisms of empathy and promotes connections between the mirror system and the ability to infer the emotions and mental states of others [42], [43].

In our opinion, one of the most important results of the present study is that the strongest source of activity for the incongruent/congruent difference was located in the right rectal gyrus (BA11) of the ventromedial orbitofrontal cortex, which lies at the base of the frontal lobe and rests on the upper wall of the orbital cavity. This region is involved in the processing of social and emotional decisions and appears to be important for developing, evaluating and filtering emotional information. A region with these characteristics would be crucial for the recognition and processing of affective action content, but not of the goals of actions. Notably, our previous experiments investigating the comprehension of non-affective, goal-directed behavior did not implicate this region [22]–[24], which suggests that the specific role of this area is related to the processing of affective cues conveyed by body language.

The early anterior N400 was partly paralleled and followed by a centroparietal N400 that peaked between 400 and 600 ms in response to incongruent EBL and by a posterior LP over right visual areas that was larger in response to congruent EBL. The topographic distribution of the N400 was similar to that of the typical centroparietal N400 responses reported in verbal [44] and nonverbal language studies [18]. Consistent with our study, Gunter and Bach [18] observed a frontal N300 that was followed by a centroparietal N400 response, the latter being larger following incongruent gestures. The centroparietal N400 is a supramodal, multisensory component that is thought to reflect difficulty in integrating incoming inputs with previous information at a conceptual level, independent of sensory modality. Classically, the N400 has been elicited by semantically anomalous, incongruent words [45], but it has also been elicited by incongruent/unexpected or infrequent/incomprehensible items presented as drawings [46], spoken or written language, pictures, and videos [47], [48]. An interesting ERP study by Van Elk and coworkers [25] found an anterior N3 that was followed by a centroparietal N400; in this study, the subjects prepared meaningful or meaningless actions performed with objects and provided semantic categorization responses before executing the actions. Interestingly, the scalp distribution of the N400 effects for action-related body parts (the words eye and mouth) in meaningful actions differed from that of the effects for action-unrelated body parts. More specifically, a classical N400 effect with a posterior distribution was found for the comparison between action-unrelated and action-related body parts, whereas an anterior N400 effect was found for object-incongruent compared with object-congruent words.
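To make this quantification concrete, the sketch below computes grand-average ERPs, a mean-amplitude measure in the 400–600 ms window, and the incongruent-minus-congruent difference used in analyses of this kind. It is a generic NumPy illustration rather than the authors' pipeline; the sampling rate, epoch limits, channel count, and placeholder data are all assumptions.

```python
import numpy as np

FS = 512    # sampling rate in Hz (assumed)
T0 = -0.1   # epoch start relative to stimulus onset, in seconds (assumed)

def mean_amplitude(epochs, t_start, t_end):
    """Grand-average ERP, then mean voltage in a latency window, per channel."""
    erp = epochs.mean(axis=0)   # trials x channels x samples -> channels x samples
    i0 = int(round((t_start - T0) * FS))
    i1 = int(round((t_end - T0) * FS))
    return erp[:, i0:i1].mean(axis=1)

# Placeholder epoched data: 120 trials, 64 channels, 1-s epochs from -100 ms.
rng = np.random.default_rng(0)
congruent = rng.normal(0.0, 1.0, (120, 64, 512))
incongruent = rng.normal(0.0, 1.0, (120, 64, 512))

# Centroparietal N400: mean amplitude between 400 and 600 ms post-onset,
# then the incongruent-minus-congruent difference per channel.
n400_effect = (mean_amplitude(incongruent, 0.400, 0.600)
               - mean_amplitude(congruent, 0.400, 0.600))
```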

It has been noted that the N400 tends to have a more anterior distribution when elicited by pictures or actions than when elicited by words [22], [49]–[51]. These anterior negativities in the N400 range are assumed to reflect image- or action-specific semantic processing that is functionally similar to the processing of amodal semantic information indexed by the linguistic centroparietal N400. According to Amoruso and coworkers [48], the activation of motor and premotor regions during action comprehension could partially explain the frontal distribution of N400 responses to incongruent body patterns or movements observed in action processing studies.

Previous studies have interpreted the emergence of an anterior N400 in response to incongruent gestures as reflecting the activation of motor/premotor regions representing action intentions (see Proverbio & Riva [22] for a review). More specifically, previous source localization data indicated the premotor, motor, and inferior parietal cortices and the orbitofrontal cortex as possible neural generators of these effects [23]. In the present study, in which observers had to process the emotional state of the acting person, the LORETA solution for the N400 difference wave (congruent minus incongruent) indicated intense activity in the so-called emotional brain (the limbic system and orbitofrontal cortex), as well as in the premotor cortex, which is involved in processing the meaning of actions; this is consistent with previous neuroimaging studies using neutral actions or hand actions alone.

As for behavioral performance, in this study the accuracy data showed that it was easier to reject the pairing of an EBL display with an incongruent verbal description (2% of errors) than to confirm its correspondence with a congruent one (12% of errors). Although response speed was the same, uncertainty was higher for congruent than for incongruent trials. A similar pattern of results was found by Lima and coworkers [52], in whose study action/gesture mismatches were recognized more accurately than action/gesture matches. However, findings from other studies are not consistent with this pattern, depending on task requirements. For example, in the study by Gu and coworkers [53], in which the task was to recognize a facial expression by choosing among the six Ekman emotion categories, participants made significantly better and faster decisions when the faces were accompanied by bodies with congruent expressions than when they were accompanied by bodies with incongruent expressions. In their case, however, the match decision was based on a choice among six possibilities (anger, fear, surprise, disgust, happiness, sadness), whereas in our experiment the number of possibilities was unknown and unpredictable. Ultimately, it is not rare to find better performance for mismatch than for match decisions. In a very interesting fMRI study [54], in which participants encoded the association between a person's face and their home and were thereafter asked to judge the congruency of the pair, accuracy was found to be higher on mismatch than on match trials. Importantly, the activity of the CA1 region of the hippocampus was significantly greater for correct mismatch (correct rejection) than for correct match (hit) trials. Indeed, CA1 activation was greater when participants encountered house probes that violated their mnemonic predictions (correct mismatches) than probes that confirmed those predictions (correct matches), providing a neural explanation for the better behavioral performance on incongruent trials, as in our study.

In conclusion, the present results support previous findings regarding non-affective action processing [22]–[24], [28], [55]–[58] in showing activation of the frontoparietal system. Additionally, they provide new evidence for the crucial role of the limbic and ventromedial orbitofrontal cortices in the recognition of emotional body language (EBL).

The ERP results indicate that facial and bodily expressions undergo prioritized processing (as early as 300 ms) that heavily involves the affective brain, and that the output of this processing is rapidly compared with verbal information, allowing communicative and social behavior to be regulated on the basis of both linguistic and non-verbal cues. In this view, considering that we become conscious of environmental events (e.g., another person's movement) about half a second after they occur, the automatic processing of potential affective body signals at about 400 ms can be considered quick, and especially useful.

Author Contributions

Conceived and designed the experiments: AMP MC. Performed the experiments: MC MM. Analyzed the data: MC MM AZ. Contributed reagents/materials/analysis tools: AZ. Wrote the paper: AMP.

References

  • 1. Ekman P, Friesen WV (2003) Unmasking the Face: A Guide to Recognizing Emotions from Facial Clues. Los Altos, CA: ISHK.
  • 2. Duran ND, Dale R, Kello CT, Street CN, Richardson DC (2013) Exploring the movement dynamics of deception. Front Psychol 4: 140.
  • 5. Walker MB, Trimboli A (1989) Communicating affect: The role of verbal and nonverbal content. J Lang Soc Psychol 8 (3–4).
  • 20. Stanislavski C (1936) An Actor Prepares. London: Methuen, 1988.
  • 44. Wlotko EW, Federmeier KD (2013) Two sides of meaning: The scalp-recorded N400 reflects distinct contributions from the cerebral hemispheres. Front Psychol 4: 181. doi: 10.3389/fpsyg.2013.00181.


Original Research Article

Body language in the brain: constructing meaning from expressive movement


  • 1 Department of Psychiatry, University of British Columbia, Vancouver, BC, Canada
  • 2 Mental Health and Integrated Neurobehavioral Development Research Core, Child and Family Research Institute, Vancouver, BC, Canada
  • 3 Psychiatric Epidemiology and Evaluation Unit, Saint John of God Clinical Research Center, Brescia, Italy
  • 4 Department of Psychological and Brain Sciences, University of California, Santa Barbara, CA, USA

This fMRI study investigated neural systems that interpret body language—the meaningful emotive expressions conveyed by body movement. Participants watched videos of performers engaged in modern dance or pantomime that conveyed specific themes such as hope, agony, lust, or exhaustion. We tested whether the meaning of an affectively laden performance was decoded in localized brain substrates as a distinct property of action separable from other superficial features, such as choreography, kinematics, performer, and low-level visual stimuli. A repetition suppression (RS) procedure was used to identify brain regions that decoded the meaningful affective state of a performer, as evidenced by decreased activity when emotive themes were repeated in successive performances. Because the theme was the only feature repeated across video clips that were otherwise entirely different, the occurrence of RS identified brain substrates that differentially coded the specific meaning of expressive performances. RS was observed bilaterally, extending anteriorly along middle and superior temporal gyri into temporal pole, medially into insula, rostrally into inferior orbitofrontal cortex, and caudally into hippocampus and amygdala. Behavioral data on a separate task indicated that interpreting themes from modern dance was more difficult than interpreting pantomime; a result that was also reflected in the fMRI data. There was greater RS in left hemisphere, suggesting that the more abstract metaphors used to express themes in dance compared to pantomime posed a greater challenge to brain substrates directly involved in decoding those themes. We propose that the meaning-sensitive temporal-orbitofrontal regions observed here comprise a superordinate functional module of a known hierarchical action observation network (AON), which is critical to the construction of meaning from expressive movement. The findings are discussed with respect to a predictive coding model of action understanding.

Introduction

Body language is a powerful form of non-verbal communication providing important clues about the intentions, emotions, and motivations of others. In the course of our everyday lives, we pick up information about what people are thinking and feeling through their body posture, mannerisms, gestures, and the prosody of their movements. This intuitive social awareness is an impressive feat of neural integration; the cumulative result of activity in distributed brain systems specialized for coding a wide range of social information. Reading body language is more than just a matter of perception. It entails not only recognizing and coding socially relevant visual information, but also ascribing meaning to those representations.

We know a great deal about brain systems involved in the perception of facial expressions, eye movements, body movement, hand gestures, and goal directed actions, as well as those mediating affective, decision, and motor responses to social stimuli. What is still missing is an understanding of how the brain “reads” body language. Beyond the decoding of body motion, what are the brain substrates directly involved in extracting meaning from affectively laden body expressions? The brain has several functionally specialized structures and systems for processing socially relevant perceptual information. A subcortical pulvinar-superior colliculus-amygdala-striatal circuit mediates reflex-like perception of emotion from body posture, particularly fear, and activates commensurate reflexive motor responses ( Dean et al., 1989 ; Cardinal et al., 2002 ; Sah et al., 2003 ; de Gelder and Hadjikhani, 2006 ). A region of the occipital cortex known as the extrastriate body area (EBA) is sensitive to bodily form ( Bonda et al., 1996 ; Hadjikhani and de Gelder, 2003 ; Astafiev et al., 2004 ; Peelen and Downing, 2005 ; Urgesi et al., 2006 ). The fusiform gyrus of the ventral occipital and temporal lobes has a critical role in processing faces and facial expressions ( McCarthy et al., 1997 ; Hoffman and Haxby, 2000 ; Haxby et al., 2002 ). Posterior superior temporal sulcus is involved in perceiving the motion of biological forms in particular ( Allison et al., 2000 ; Pelphrey et al., 2005 ). Somatosensory, ventromedial prefrontal, premotor, and insular cortex contribute to one's own embodied awareness of perceived emotional states ( Adolphs et al., 2000 ; Damasio et al., 2000 ). Visuomotor processing in a functional brain network known as the action observation network (AON) codes observed action in distinct functional modules that together link the perception of action and emotional body language with ongoing behavioral goals and the formation of adaptive reflexes, decisions, and motor behaviors ( Grafton et al., 1996 ; Rizzolatti et al., 1996b , 2001 ; Hari et al., 1998 ; Fadiga et al., 2000 ; Buccino et al., 2001 ; Grézes et al., 2001 ; Grèzes et al., 2001 ; Ferrari et al., 2003 ; Zentgraf et al., 2005 ; Bertenthal et al., 2006 ; de Gelder, 2006 ; Frey and Gerry, 2006 ; Ulloa and Pineda, 2007 ). Given all we know about how bodies, faces, emotions, and actions are perceived, one might expect a clear consensus on how meaning is derived from these percepts. Perhaps surprisingly, while we know these systems are crucial to integrating perceptual information with affective and motor responses, how the brain deciphers meaning based on body movement remains unknown. The focus of this investigation was to identify brain substrates that decode meaning from body movement, as evidenced by meaning-specific neural processing that differentiates body movements conveying distinct expressions.

To identify brain substrates sensitive to the meaningful emotive state of an actor conveyed through body movement, we used repetition suppression (RS) fMRI. This technique identifies regions of the brain that code for a particular stimulus dimension (e.g., shape) by revealing substrates that have different patterns of neural activity in response to different attributes of that dimension (e.g., circle, square, triangle; Grill-Spector et al., 2006 ). When a particular attribute is repeated, synaptic activity and the associated blood oxygen level-dependent (BOLD) response decreases in voxels containing neuronal assemblies that code that attribute ( Wiggs and Martin, 1998 ; Grill-Spector and Malach, 2001 ). We have used this method previously to show that various properties of an action such as movement kinematics, object goal, outcome, and context-appropriateness of action mechanics are uniquely coded by different neural substrates within a parietal-frontal action observation network (AON; Hamilton and Grafton, 2006 , 2007 , 2008 ; Ortigue et al., 2010 ). Here, we applied RS-fMRI to identify brain areas in which activity decreased when the meaningful emotive theme of an expressive performance was repeated between trials. The results demonstrate a novel coding function of the AON—decoding meaning from body language.

Working with a group of professional dancers, we produced a set of video clips in which performers intentionally expressed a particular meaningful theme either through dance or pantomime. Typical themes consisted of expressions of hope, agony, lust, or exhaustion. The experimental manipulation of theme was studied independently of choreography, performer, or camera viewpoint, which allowed us to repeat the meaning of a movement sequence from one trial to another while varying physical movement characteristics and perceptual features. With this RS-fMRI design, a decrease in BOLD activity for repeated relative to novel themes (RS) could not be attributed to specific movements, characteristics of the performer, “low-level” visual features, or the general process of attending to body expressions. Rather, RS revealed brain areas in which specific voxel-wise neural population codes differentiated meaningful expressions based on body movement (Figure 1 ).


Figure 1. Manipulating trial sequence to induce RS in brain regions that decode body language. The order of video presentation was controlled such that themes depicted in consecutive videos were either novel or repeated. Each consecutive video clip was unique; repeated themes were always portrayed by different dancers, different camera angles, or both. Thus, RS for repeated themes was not the result of low-level visual features, but rather identified brain areas that were sensitive to the specific meaningful theme conveyed by a performance. In brain regions showing RS, a particular affective theme—hope, for example—will evoke a particular pattern of neural activity. A novel theme on the subsequent trial—illness, for instance—will trigger a different but equally strong pattern of neural activity in distinct cell assemblies, resulting in an equivalent BOLD response. In contrast, a repetition of the hopefulness theme on the subsequent trial will trigger activity in the same neural assemblies as the first trial, but to a lesser extent, resulting in a reduced BOLD response for repeated themes. In this way, RS reveals regions that support distinct patterns of neural activity in response to different themes.
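The RS logic summarized in this caption can be illustrated with a toy simulation. The sketch below is purely hypothetical: the theme-specific activity patterns, the adaptation factor, and the summed-activity BOLD proxy are assumptions chosen only to show why a repeated theme yields a reduced aggregate response while a novel theme does not.

```python
import numpy as np

rng = np.random.default_rng(1)
N_UNITS = 200   # neurons in a voxel-sized population (assumed)
ADAPT = 0.6     # attenuation of a just-activated assembly (assumed)

# Each theme is coded by a sparse, theme-specific pattern of active units.
themes = {name: (rng.random(N_UNITS) < 0.2).astype(float)
          for name in ("hope", "agony", "lust", "exhaustion")}

def bold_proxy(theme, previous=None):
    """Summed population activity; repeating a theme attenuates its assembly."""
    pattern = themes[theme].copy()
    if previous == theme:
        pattern *= ADAPT
    return pattern.sum()

print(bold_proxy("hope"))                      # novel theme: full response
print(bold_proxy("agony", previous="hope"))    # different theme: full response
print(bold_proxy("hope", previous="hope"))     # repeated theme: suppressed
```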

Participants were scanned using fMRI while viewing a series of 10-s video clips depicting modern dance or pantomime performances that conveyed specific meaningful themes. Because each performer had a unique artistic style, the same theme could be portrayed using completely different physical movements. This allowed the repetition of meaning while all other aspects of the physical stimuli varied from trial to trial. We predicted that specific regions of the AON engaged by observing expressive whole body movement would show suppressed BOLD activation for repeated relative to novel themes (RS). Brain regions showing RS would reveal brain substrates directly involved in decoding meaning based on body movement.

The dance and pantomime performances used here conveyed expressive themes through movement, but did not rely on typified, canonical facial expressions to invoke particular affective responses. Rather, meaningful themes were expressed with unique artistic choreography while facial expressions were concealed with a classic white mime's mask. The result was a subtle stimulus set that promoted thoughtful, interpretive viewing that could not elicit reflex-like responses based on prototypical facial expressions. In so doing, the present study shifted the focus away from automatic affective resonance toward a more deliberate ascertainment of meaning from movement.

While dance and pantomime both expressed meaningful emotive themes, the quality of movement and the types of gestures used were different. Pantomime sequences used fairly mundane gestures and natural, everyday movements. Dance sequences used stylized gestures and interpretive, prosodic movements. The critical distinction between these two types of expressive movement is in the degree of abstraction in the metaphors that link movement with meaning (see Morris, 2002 for a detailed discussion of movement metaphors). Pantomime by definition uses gesture to mimic everyday objects, situations, and behavior, and thus relies on relatively concrete movement metaphors. In contrast, dance relies on more abstract movement metaphors that draw on indirect associations between qualities of movement and the emotions and thoughts it evokes in a viewer. We predicted that since dance expresses meaning more abstractly than pantomime, dance sequences would be more difficult to interpret than pantomimed sequences, and would likewise pose a greater challenge to brain processes involved in decoding meaning from movement. Thus, we predicted greater involvement of thematic decoding areas for danced than for pantomimed movement expressions. Greater RS for dance than pantomime could result from dance triggering greater activity upon a first presentation, a greater reduction in activity with a repeated presentation, or some combination of both. Given our prediction that greater RS for dance would be linked to interpretive difficulty, we hypothesized it would be manifested as an increased processing demand resulting in greater initial BOLD activity for novel danced themes.

Participants

Forty-six neurologically healthy, right-handed individuals (30 women, mean age = 24.22 years, range = 19–55 years) provided written informed consent and were paid for their participation. Performers also agreed in writing to allow the use of their images and videos for scientific purposes. The protocol was approved by the Office of Research Human Subjects Committee at the University of California Santa Barbara (UCSB).

Eight themes were depicted, including four danced themes (happy, hopeful, fearful, and in agony) and four pantomimed themes (in love, relaxed, ill, and exhausted). Performance sequences were choreographed and performed by four professional dancers recruited from the SonneBlauma Danscz Theatre Company (Santa Barbara, California; now called ArtBark International, http://www.artbark.org/ ). Performers wore expressionless white masks so body language was conveyed through gestural whole-body movement as opposed to facial expressions. To express each theme, performers adopted an affective stance and improvised a short sequence of modern dance choreography (two themes per performer) or pantomime gestures (two themes per performer). Each of the eight themes was performed by two different dancers and recorded from two different camera angles, resulting in four distinct videos representing each theme (32 distinct videos in total; clips available in Supplementary Materials online).
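The factorial structure of this stimulus set (8 themes × 2 performers × 2 camera angles = 32 videos) can be enumerated in a few lines, as in the sketch below. The performer and camera labels are placeholders; this is merely a restatement of the design, not production code.

```python
from itertools import product

danced = ["happy", "hopeful", "fearful", "in agony"]
pantomimed = ["in love", "relaxed", "ill", "exhausted"]

# Two performers and two camera angles per theme (labels are hypothetical).
videos = [(theme, performer, angle)
          for theme, performer, angle in product(danced + pantomimed,
                                                 ("performer A", "performer B"),
                                                 ("angle 1", "angle 2"))]
assert len(videos) == 32
```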

Behavioral Procedure

In a separate session outside the scanner either before or after fMRI data collection, an interpretation task measured observers' ability to discern the intended meaning of a performance (Figure 2 ). The interpretation task was carried out in a separate session to avoid confounding movement observation in the scanner with explicit decision-making and overt motor responses. Participants were asked to view each video clip and choose from a list of four options the theme that best corresponded with the movement sequence they had just watched. Responses were made by pressing one of four corresponding buttons on a keyboard. Two behavioral measures were collected to assess how well participants interpreted the intended meaning of expressive performances. Consistency scores reflected the proportion of observers' interpretations that matched the performer's intended expression. Response times indicated the time taken to make interpretive judgments. In order to encourage subjects to use their initial impressions and to avoid over-deliberating, the four response options were previewed briefly immediately prior to video presentation.


Figure 2. Experimental testing procedure . Participants completed a thematic interpretation task outside the scanner, either before or after the imaging session. Performance on this task allowed us to test whether there was a difference in how readily observers interpreted the intended meaning conveyed through dance or pantomime. Any performance differences on this explicit theme judgment task could help interpret the functional significance of observed differences in brain activity associated with passively viewing the two types of movement in the scanner.

For the interpretation task collected outside the scanner, videos were presented and responses collected on a Mac Powerbook G4 laptop programmed using the Psychtoolbox (v. 3.0.8) extension ( Brainard, 1997 ; Pelli and Brainard, 1997 ) for Mac OSX running under Matlab 7.5 R2007b (the MathWorks, Natick, MA). Each trial began with the visual presentation of a list of four theme options corresponding to four button press responses (“u,” “i,” “o,” or “p” keyboard buttons). This list remained on the screen for 3 s, the screen blanked for 750 ms, and then the movie played for 10 s. Following the presentation of the movie, the four response options were presented again, and remained on the screen until a response was made. Each unique video was presented twice, resulting in 64 trials total. Video order was randomized for each participant, and the response options for each trial included the intended theme and three randomly selected alternatives.
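A minimal sketch of the trial-list construction described above: each unique video is shown twice, and the response options pair the intended theme with three randomly drawn alternatives, all in randomized order. It is written in plain Python rather than the Psychtoolbox/Matlab environment actually used, and the theme labels and video tuples are placeholders.

```python
import random

rng = random.Random(42)
THEMES = ["happy", "hopeful", "fearful", "in agony",
          "in love", "relaxed", "ill", "exhausted"]
VIDEOS = [(t, d, a) for t in THEMES for d in (0, 1) for a in (0, 1)]  # 32 clips

def build_trials():
    """64 trials: each video twice, intended theme plus three random foils."""
    trials = []
    for video in VIDEOS * 2:
        intended = video[0]
        foils = rng.sample([t for t in THEMES if t != intended], 3)
        options = [intended] + foils
        rng.shuffle(options)
        trials.append({"video": video, "options": options,
                       "answer": options.index(intended)})
    rng.shuffle(trials)   # randomize video order across the session
    return trials

assert len(build_trials()) == 64
```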

Neuroimaging Procedure

fMRI data were collected with a Siemens 3.0 T Magnetom Tim Trio system using a 12-channel phased array head coil. Functional images were acquired with a T2* weighted single shot gradient echo, echo-planar sequence sensitive to Blood Oxygen Level Dependent (BOLD) contrast (TR = 2 s; TE = 30 ms; FA = 90°; FOV = 19.2 cm). Each volume consisted of 37 slices acquired parallel to the AC–PC plane (interleaved acquisition; 3 mm thick with 0.5 mm gap; 3 × 3 mm in-plane resolution; 64 × 64 matrix).

Each participant completed four functional scanning runs lasting approximately 7.5 min while viewing danced or acted expressive movement sequences. While there were a total of eight themes in the stimulus set for the study, each scanning run depicted only two of those eight themes. Over the course of all four scanning runs, all eight themes were depicted. Trial sequences were arranged such that the theme of a movement sequence was either novel or repeated with respect to the previous trial. This allowed for the analysis of BOLD response RS for repeated vs. novel themes. Each run presented 24 video clips (3 presentations of 8 unique videos depicting 2 themes × 2 dancers × 2 camera angles). Novel and repeated themes were intermixed within each scanning run, with no more than three sequential repetitions of the same theme. Two scanning runs depicted dance and two runs depicted pantomime performances. The order of runs was randomized for each participant. The experiment was controlled using Presentation software (version 13.0, Neurobehavioral Systems Inc, CA). Participants were instructed to focus on the movement performance while viewing the videos. No specific information about the themes portrayed or types of movement used was provided, and no motor responses were required.
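The trial-ordering constraint described above (novel and repeated themes intermixed, with no more than three sequential repetitions of the same theme) can be met with simple rejection sampling, as sketched below. This illustrates one way to satisfy such a constraint, not the authors' randomization code; the maximum run length is a parameter because the exact counting convention is not specified.

```python
import random

rng = random.Random(7)

def build_run(clips_for_two_themes, max_same_theme_in_a_row=3):
    """Order 24 clips (3 showings of 8 videos spanning 2 themes), then label
    each trial 'novel' or 'repeated' relative to the preceding trial's theme."""
    clips = clips_for_two_themes * 3
    while True:
        rng.shuffle(clips)
        themes = [clip[0] for clip in clips]
        run_len, longest = 1, 1
        for prev, cur in zip(themes, themes[1:]):
            run_len = run_len + 1 if cur == prev else 1
            longest = max(longest, run_len)
        if longest <= max_same_theme_in_a_row:
            break
    labels = ["novel"] + ["repeated" if cur == prev else "novel"
                          for prev, cur in zip(themes, themes[1:])]
    return list(zip(clips, labels))

two_themes = [(t, d, a) for t in ("happy", "hopeful")
              for d in (0, 1) for a in (0, 1)]   # 8 unique clips (placeholders)
run = build_run(two_themes)
assert len(run) == 24
```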

For the behavioral data collected outside the scanner, mean consistency scores and mean response time (RT; ms) were computed for each participant. Consistency and RT were each submitted to an ANOVA with Movement Type (dance vs. pantomime) as a within-subjects factor using Stata/IC 10.0 for Macintosh.
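With a single two-level within-subject factor, this repeated-measures ANOVA reduces to a paired t-test, with F(1, n−1) = t(n−1)². The sketch below demonstrates the equivalence on placeholder per-participant means (the subject count is chosen only to match the reported degrees of freedom); it illustrates the logic of the test, not the Stata analysis actually run.

```python
import numpy as np
from scipy import stats

# Placeholder per-participant consistency scores (one value per subject).
rng = np.random.default_rng(3)
dance = rng.normal(0.55, 0.10, 43)
pantomime = rng.normal(0.75, 0.10, 43)

# Paired t-test; squaring t gives the one-factor within-subjects F statistic.
t, p = stats.ttest_rel(pantomime, dance)
print(f"F(1, {dance.size - 1}) = {t**2:.2f}, p = {p:.4g}")
```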

Statistical analysis of the neuroimaging data was organized to identify: (1) brain areas responsive to the observation of expressive movement sequences, defined by BOLD activity relative to an implicit baseline, (2) brain areas directly involved in decoding meaning from movement, defined by RS for repeated themes, (3) brain areas in which processes for decoding thematic meaning varied as a function of abstractness, defined by greater RS for danced than pantomimed themes, and (4) the specific pattern of BOLD activity differences for novel and repeated themes as a function of danced or pantomimed movements in regions showing greater RS for dance.

The fMRI data were analyzed using Statistical Parametric Mapping software (SPM5, Wellcome Department of Imaging Neuroscience, London; www.fil.ion.ucl.ac.uk/spm ) implemented in Matlab 7.5 R2007b (The MathWorks, Natick, MA). Individual scans were realigned, slice-time corrected and spatially normalized to the Montreal Neurological Institute (MNI) template in SPM5 with a resampled resolution of 3 × 3 × 3 mm. A smoothing kernel of 8 mm was applied to the functional images. A general linear model was created for each participant using SPM5. Parameter estimates of event-related BOLD activity were computed for novel and repeated themes depicted by danced and pantomimed movements, separately for each scanning run, for each participant.

Because the intended theme of each movement sequence was not expressed at a discrete time point but rather throughout the duration of the 10 s video clip, the most appropriate hemodynamic response function (HRF) with which to model the BOLD response at the individual level was determined empirically prior to parameter estimation. Of interest was whether the shape of the BOLD response to these relatively long video clips differed from the canonical HRF typically implemented in SPM. The shape of the BOLD response was estimated for each participant by modeling a finite impulse response function ( Ollinger et al., 2001 ). Each trial was represented by a sequence of 12 consecutive TRs, beginning at the onset of each video clip. Based on this deconvolution, a set of beta weights describing the shape of the response over a 24 s interval was obtained for both novel and repeated themes depicted by both danced and pantomimed movement sequences. To determine whether adjustments should be made to the canonical HRF implemented in SPM, the BOLD responses of a set of 45 brain regions within a known AON were evaluated (see Table 1 for a complete list). To find the most representative shape of the BOLD response within the AON, deconvolved beta weights for each condition were averaged across sessions and collapsed by singular value decomposition analysis ( Golub and Reinsch, 1970 ). This resulted in a characteristic signal shape that maximally described the actual BOLD response in AON regions for both novel and repeated themes, for both danced and pantomimed sequences. This examination of the BOLD response revealed that its time-to-peak was delayed 4 s compared to the canonical HRF response curve typically implemented in SPM. That is, the peak of the BOLD response was reached at 8–10 s following stimulus onset instead of the canonical 4–6 s. Given this result, parameter estimation for conditions of interest in our main analysis was based on a convolution of the design matrix for each participant with a custom HRF that accounted for the observed 4 s delay. Time-to-peak of the HRF was adjusted from 6 to 10 s while keeping the same overall width and height of the canonical function implemented in SPM. Using this custom HRF, the 10 s video duration was modeled as usual in SPM by convolving the HRF with a 10 s boxcar function.
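One way to realize the HRF adjustment described above is to delay a generic double-gamma HRF by 4 s and convolve it with a 10 s boxcar, as in the sketch below. This is a minimal NumPy/SciPy illustration, not SPM's implementation; the HRF shape parameters, run length, and event onset are assumptions.

```python
import numpy as np
from scipy.stats import gamma

TR = 2.0
t = np.arange(0, 32, TR)   # HRF support in seconds, sampled at the TR

def double_gamma(t, peak_shape=6.0, under_shape=16.0, ratio=1 / 6.0):
    """Generic SPM-style double-gamma HRF (peak minus undershoot), unit height."""
    h = gamma.pdf(t, peak_shape) - ratio * gamma.pdf(t, under_shape)
    return h / h.max()

# Delay the canonical shape by 4 s so the time-to-peak moves from roughly
# 6 s to roughly 10 s while width and height are unchanged.
hrf = double_gamma(np.clip(t - 4.0, 0.0, None))

# Model one 10-s video as a boxcar convolved with the delayed HRF.
n_scans = 225                                   # ~7.5 min run at TR = 2 s (assumed)
boxcar = np.zeros(n_scans)
onset_scan, duration_scans = 20, int(10 / TR)   # hypothetical event timing
boxcar[onset_scan:onset_scan + duration_scans] = 1.0
regressor = np.convolve(boxcar, hrf)[:n_scans]  # predicted BOLD time course
```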


Table 1. The action observation network, as defined by previous investigations .

Second-level whole-brain analysis was conducted with SPM8 using a 2 × 2 random effects model with Movement Type and Repetition as within-subject factors, using the weighted parameter estimates (contrast images) obtained at the individual level as data. A gray matter mask was applied to whole-brain contrast images prior to second-level analysis to remove white matter voxels from the analysis. Six second-level contrasts were computed, including (1) expressive movement observation (BOLD relative to baseline), (2) dance observation effect (danced sequences > pantomimed sequences), (3) pantomime observation effect (pantomimed sequences > danced sequences), (4) RS (novel themes > repeated themes), (5) Dance × Repetition interaction (RS for dance > RS for pantomime), and (6) Pantomime × Repetition interaction (RS for pantomime > RS for dance). Following the creation of T-map images in SPM8, FSL was used to create Z-map images (Version 4.1.1; Analysis Group, FMRIB, Oxford, UK; Smith et al., 2004; Jenkinson et al., 2012). The results were thresholded at p < 0.05, cluster-corrected using FSL subroutines based on Gaussian random field theory (Poldrack et al., 2011; Nichols, 2012). To examine the nature of the differences in RS between dance and pantomime, a mask image was created based on the corresponding cluster-thresholded Z-map of regions showing greater RS for dance, and the mean BOLD activity (contrast image values) was computed for novel and repeated dance and pantomime contrasts from each participant's first-level analysis. Mean BOLD activity measures were submitted to a 2 × 2 ANOVA with Movement Type (dance vs. pantomime) and Repetition (novel vs. repeat) as within-subjects factors using Stata/IC 10.0 for Macintosh.
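At the level of a single voxel or ROI, the Dance × Repetition interaction amounts to a one-sample random-effects test on a difference of differences across subjects. The sketch below mirrors that logic on placeholder per-subject contrast values; it is not the SPM/FSL code itself, and the numbers are illustrative only.

```python
import numpy as np
from scipy import stats

# Per-subject parameter estimates for one voxel/ROI (illustrative values).
# Columns: novel dance, repeated dance, novel pantomime, repeated pantomime.
rng = np.random.default_rng(5)
betas = rng.normal([1.2, 0.7, 0.9, 0.8], 0.3, size=(43, 4))

rs_dance = betas[:, 0] - betas[:, 1]   # RS for dance, per subject
rs_pant = betas[:, 2] - betas[:, 3]    # RS for pantomime, per subject
interaction = rs_dance - rs_pant       # Movement Type x Repetition

# Random-effects test: is the difference of differences nonzero across subjects?
t, p = stats.ttest_1samp(interaction, 0.0)
print(f"t({interaction.size - 1}) = {t:.2f}, p = {p:.4g}")
```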

In order to ensure that observed RS effects for repeated themes were not due to low-level kinematic effects, a motion tracking analysis of all 32 videos was performed using Tracker 4.87 software for Mac (written by Douglas Brown, distributed on the Open Source Physics platform, www.opensourcephysics.org ). A variety of motion parameters, including velocity, acceleration, momentum, and kinetic energy, were computed within the Tracker software based on semi-automated/supervised motion tracking of the top of the head, one hand, and one foot of each performer. The key question relevant to our results was whether there was a difference in motion between videos depicting novel and repeated themes. One-factor ANOVAs for each motion parameter revealed no significant differences in coarse kinematic profiles between "novel" and "repeated" theme trials (all p's > 0.05). This was not particularly surprising, given that all videos were used for both novel and repeated themes, which were defined entirely based on trial sequence. In contrast, the comparison between danced and pantomimed themes did reveal significant differences in kinematic profiles. A 2 × 3 ANOVA with Movement Type (Dance, Pantomime) and Body Point (Hand, Head, Foot) as factors was conducted for each motion parameter (velocity, acceleration, momentum, and kinetic energy), and revealed greater motion energy on all parameters for the danced themes compared to the pantomimed themes (all p's < 0.05). Any differences in RS between danced and pantomimed themes may therefore be attributed to differences in kinematic properties of body movement. Importantly, however, because there were no systematic differences in motion kinematics between novel and repeated themes, any RS effects for repeated themes could not be attributed to the effect of motion kinematics.
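The kinematic summaries compared here can be derived from tracked coordinates by finite differences, as in the sketch below. This is a generic illustration, not the Tracker 4.87 analysis; the frame rate and synthetic trajectory are assumptions, and mass is set to a nominal constant since it cancels out of between-condition comparisons.

```python
import numpy as np

def motion_parameters(x, y, t, mass=1.0):
    """Mean speed, acceleration magnitude, momentum, and kinetic energy for
    one tracked body point, using finite differences (np.gradient)."""
    vx, vy = np.gradient(x, t), np.gradient(y, t)
    speed = np.hypot(vx, vy)
    accel = np.hypot(np.gradient(vx, t), np.gradient(vy, t))
    return (speed.mean(), accel.mean(),
            (mass * speed).mean(), (0.5 * mass * speed**2).mean())

# Synthetic 10-s trajectory sampled at 30 frames per second (assumed).
rng = np.random.default_rng(8)
t = np.arange(0, 10, 1 / 30)
x = np.cumsum(rng.normal(0, 0.5, t.size))
y = np.cumsum(rng.normal(0, 0.5, t.size))
print(motion_parameters(x, y, t))
```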

Figure 3 illustrates the behavioral results of the interpretation task completed outside the scanner. Participants had higher consistency scores for pantomimed movements than danced movements [ F (1, 42) = 42.06, p < 0.0001], indicating better transmission of the intended expressive meaning from performer to viewer. Pantomimed sequences were also interpreted more quickly than danced sequences [ F (1, 42) = 27.28, p < 0.0001], suggesting an overall performance advantage for pantomimed sequences.


Figure 3. Behavioral performance on the theme judgment task . Participants more readily interpreted pantomime than dance. This was evidenced by both greater consistency between the meaningful theme intended to be expressed by the performer and the interpretive judgments made by the observer (left), and faster response times (right). This pattern of results suggests that dance was more difficult to interpret than pantomime, perhaps owing to the use of more abstract metaphors to link movement with meaning. Pantomime, on the other hand, relied on more concrete, mundane sorts of movements that were more likely to carry meaningful associations based on observers' prior everyday experience. SEM, standard error of the mean.

Expressive Whole-body Movements Engage the Action Observation Network

Brain activity associated with the observation of expressive movement sequences was revealed by significant BOLD responses to observing both dance and pantomime movement sequences, relative to the inter-trial resting baseline. Figure 4 depicts significant activation ( p < 0.05, cluster corrected in FSL) rendered on an inflated cortical surface of the Human PALS-B12 Atlas ( Van Essen, 2005 ) using Caret (Version 5. 61; http://www.nitrc.org/projects/caret ; Van Essen et al., 2001 ). Table 2 presents the MNI coordinates for selected voxels within clusters active during movement observation, as labeled in Figure 4 . Region names were obtained from the Harvard-Oxford Cortical and Subcortical Structural Atlases ( Frazier et al., 2005 ; Desikan et al., 2006 ; Makris et al., 2006 ; Goldstein et al., 2007 ; Harvard Center for Morphometric Analysis; www.partners.org/researchcores/imaging/morphology_MGH.asp ), and Brodmann Area labels were obtained from the Juelich Histological Atlas ( Eickhoff et al., 2005 , 2006 , 2007 ), as implemented in FSL. Observation of body movement was associated with robust BOLD activation encompassing cortex typically associated with the AON, including fronto-parietal regions linked to the representation of action kinematics, goals, and outcomes ( Hamilton and Grafton, 2006 , 2007 ), as well as temporal, occipital, and insular cortex and subcortical regions including amygdala and hippocampus—regions typically associated with language comprehension ( Kirchhoff et al., 2000 ; Ni et al., 2000 ; Friederici et al., 2003 ) and socio-affective information processing and decision-making ( Anderson et al., 1999 ; Adolphs et al., 2003 ; Bechara et al., 2003 ; Bechara and Damasio, 2005 ).


Figure 4. Expressive performances engage the action observation network . Viewing expressive whole-body movement sequences engaged a distributed cortical action observation network ( p < 0.05, FWE corrected). Large areas of parietal, temporal, frontal, and insular cortex included somatosensory, motor, and premotor regions that have been considered previously to comprise a human “mirror neuron” system, as well as non-motor areas linked to comprehension, social perception, and affective decision-making. Number labels correspond to those listed in Table 2 , which provides anatomical names and voxel coordinates for areas of peak activation. Dotted line for regions 17/18 indicates medial temporal position not visible on the cortical surface.


Table 2. Brain regions showing a significant BOLD response while participants viewed expressive whole-body movement sequences .

The Action Observation Network “Reads” Body Language

To isolate brain areas that decipher meaning conveyed by expressive body movement, regions showing RS (reduced BOLD activity for repeated compared to novel themes) were identified. Since theme was the only stimulus dimension repeated systematically across trials for this comparison, decreased activation for repeated themes could not be attributed to physical features of the stimulus such as particular movements, performers, or camera viewpoints. Figure 5 illustrates brain areas showing significant suppression for repeated themes ( p < 0.05, cluster corrected in FSL). Table 3 presents the MNI coordinates for selected voxels within significant clusters. RS was found bilaterally on the rostral bank of the middle temporal gyrus extending into temporal pole and orbitofrontal cortex. There was also significant suppression in bilateral amygdala and insular cortex.


Figure 5. BOLD suppression (RS) reveals brain substrates for "reading" body language. Regions involved in decoding meaning in body language were isolated by testing for BOLD suppression when the intended theme of an expressive performance was repeated across trials. To identify regions showing RS, BOLD activity associated with novel themes was contrasted with BOLD activity associated with repeated themes (p < 0.05, cluster corrected in FSL). Significantly greater activity for novel relative to repeated themes was evidence of RS. Given that the intended theme of a performance was the only element that was repeated between trials, regions showing RS revealed brain substrates that were sensitive to the specific meaning infused into a movement sequence by a performer. Number labels correspond to those listed in Table 3, which provides anatomical names and voxel coordinates for key clusters showing significant RS. Blue shaded area indicates vertical extent of axial slices shown.


Table 3. Brain regions showing significant BOLD suppression for repeated themes ( p < 0.05, cluster corrected in FSL) .

Movement Abstractness Challenges Brain Substrates that Decode Meaning

The behavioral analysis indicated that interpreting danced themes was more difficult than interpreting pantomimed themes, as evidenced by lower consistency scores and greater RTs. Previous research indicates that greater difficulty discriminating a particular stimulus dimension is associated with greater BOLD suppression upon repetition of that dimension's attributes ( Hasson et al., 2006 ). To test whether greater difficulty decoding meaning from dance than pantomime would also be associated with greater RS in the present data, the magnitude of BOLD response suppression was compared between movement types. This was done with the Dance × Repetition interaction contrast in the second-level whole brain analysis, which revealed regions that had greater RS for dance than for pantomime. Figure 6 illustrates brain regions showing greater RS for themes portrayed through dance than pantomime ( p < 0.05, cluster corrected in FSL). Significant differences were entirely left-lateralized in superior and middle temporal gyri, extending into temporal pole and orbitofrontal cortex, and also present in laterobasal amygdala and the cornu ammonis of the hippocampus. Table 4 presents the MNI coordinates for selected voxels within significant clusters. The reverse Pantomime × Repetition interaction was also tested, but did not reveal any regions showing greater RS for pantomime than dance ( p > 0.05, cluster corrected in FSL).


Figure 6. Regions showing greater RS for dance than pantomime . RS effects were compared between movement types. This was implemented as an interaction contrast within our Movement Type × Repetition ANOVA design [(Novel Dance > Repeated Dance) > (Novel Pantomime > Repeated Pantomime)]. Greater RS for dance was lateralized to left hemisphere meaning-sensitive regions. The brain areas shown here have been linked previously to the comprehension of meaning in verbal language, suggesting the possibility they represent shared brain substrates for building meaning from both language and action. Number labels correspond to those listed in Table 4 , which provides anatomical names and voxel coordinates for key clusters showing significantly greater RS for dance. Blue shaded area indicates vertical extent of axial slices shown.


Table 4. Brain regions showing significantly greater RS for themes expressed through dance relative to themes expressed through pantomime ( p < 0.05, cluster corrected in FSL) .

In regions showing greater RS for dance than pantomime, mean BOLD responses for novel and repeated dance and pantomime conditions were computed across voxels for each participant based on their first-level contrast images. This was done to test whether the greater RS for dance was due to greater activity in the novel condition, lower activity in the repeated condition, or some combination of both. Figure 7 illustrates the pattern of BOLD activity across conditions, demonstrating that the greater RS for dance was the result of greater initial BOLD activation in response to novel themes. The ANOVA results showed a significant Movement Type × Repetition interaction [F(1, 42) = 7.83, p < 0.01], indicating that BOLD activity in response to novel danced themes was greater than BOLD activity for all other conditions in these regions.


Figure 7. Novel danced themes challenge brain substrates that decode meaning from movement . To determine the specific pattern of BOLD activity that resulted in greater RS for dance, average BOLD activity in these areas was computed for each condition separately. Greater RS for dance was driven by a larger BOLD response to novel danced themes. Considered together with behavioral findings indicating that dance was more difficult to interpret, greater RS for dance seems to result from a greater processing “challenge” to brain substrates involved in decoding meaning from movement. SEM, standard error of the mean.

This study was designed to reveal brain regions involved in reading body language—the meaningful information we pick up about the affective states and intentions of others based on their body movement. Brain regions that decoded meaning from body movement were identified with a whole brain analysis of RS that compared BOLD activity for novel and repeated themes expressed through modern dance or pantomime. Significant RS for repeated themes was observed bilaterally, extending anteriorly along middle and superior temporal gyri into temporal pole, medially into insula, rostrally into inferior orbitofrontal cortex, and caudally into hippocampus and amygdala. Together, these brain substrates comprise a functional system within the larger AON. This strongly suggests that decoding meaning from expressive body movement constitutes a dimension of action representation not previously isolated in studies of action understanding. In the following, we argue that this embedding of meaning-sensitive regions within the AON is consistent with the hierarchical organization of the network.

Body Language as Superordinate in a Hierarchical Action Observation Network

Previous investigations of action understanding have identified the AON as a key cognitive system for the organization of action in general, highlighting the fact that both performing and observing action rely on many of the same brain substrates (Grafton, 2009; Ortigue et al., 2010; Kilner, 2011; Ogawa and Inui, 2011; Uithol et al., 2011; Grafton and Tipper, 2012). Shared brain substrates for controlling one's own action and understanding the actions of others are often taken as evidence of a "mirror neuron system" (MNS), following from physiological studies showing that cells in area F5 of the macaque monkey premotor cortex fired in response to both performing and observing goal-directed actions (Pellegrino et al., 1992; Gallese et al., 1996; Rizzolatti et al., 1996a). Since these initial observations in monkeys, there has been a tremendous effort to characterize a human analog of the MNS and to incorporate it into theories of not only action understanding, but also social cognition, language development, empathy, and neuropsychiatric disorders in which these faculties are compromised (Gallese and Goldman, 1998; Rizzolatti and Arbib, 1998; Rizzolatti et al., 2001; Gallese, 2003; Gallese et al., 2004; Rizzolatti and Craighero, 2004; Iacoboni et al., 2005; Tettamanti et al., 2005; Dapretto et al., 2006; Iacoboni and Dapretto, 2006; Shapiro, 2008; Decety and Ickes, 2011). A fundamental assumption common to all such theories is that mirror neurons provide a direct neural mechanism for action understanding through "motor resonance," or the simulation of one's own motor programs for an observed action (Jacob, 2008; Oosterhof et al., 2013). One proposed mechanism for action understanding through motor resonance is the embodiment of sensorimotor associations between action goals and specific motor behaviors (Mitz et al., 1991; Niedenthal et al., 2005; McCall et al., 2012). While the involvement of the motor system in a range of social, cognitive and affective domains is certainly worthy of focused investigation, and mirror neurons may well play an important role in supporting such "embodied cognition," this by no means implies that mirror neurons alone can account for the ability to garner meaning from observed body movement.

Since the AON is a distributed cortical network that extends beyond motor-related brain substrates engaged during action observation, it is best characterized not as a homogeneous “mirroring” mechanism, but rather as a collection of functionally specific but interconnected modules that represent distinct properties of observed actions ( Grafton, 2009 ; Grafton and Tipper, 2012 ). The present results build on this functional-hierarchical model of the AON by incorporating meaningful expression as an inherent aspect of body movement that is decoded in distinct regions of the AON. In other words, the bilateral temporal-orbitofrontal regions that showed RS for repeated themes comprise a distinct functional module of the AON that supports an additional level of the action representation hierarchy. Such an interpretation is consistent with the idea that action representation is inherently nested, carried out within a hierarchy of part-whole processes for which higher levels depend on lower levels ( Cooper and Shallice, 2006 ; Botvinick, 2008 ; Grafton and Tipper, 2012 ). We propose that the meaning infused into the body movement of a person having a particular affective stance is decoded superordinately to more concrete properties of action, such as kinematics and object goals. Under this view, while decoding these representationally subordinate properties of action may involve motor-related brain substrates, decoding “body language” engages non-motor regions of the AON that link movement and meaning, relying on inputs from lower levels of the action representation hierarchy that provide information about movement kinematics, prosodic nuances, and dynamic inflections.

While the present results suggest that the temporal-orbitofrontal regions identified here as decoding meaning from emotive body movement constitute a distinct functional module within a hierarchically organized AON, it is important to note that these regions have not previously been included in anatomical descriptions of the AON. The present study, however, isolated a property of action representation that had not been previously investigated; so identifying regions of the AON not previously included in its functional-anatomic definition is perhaps not surprising. This underscores the important point that the AON is functionally defined, such that its apparent anatomical extent in a given experimental context depends upon the particular aspects of action representation that are engaged and isolable. Previous studies of another abstract property of action representation, namely intention understanding, also illustrate this point. Inferring the intentions of an actor engages medial prefrontal cortex, bilateral posterior superior temporal sulcus, and left temporo-parietal junction—non-motor regions of the brain typically associated with “mentalizing,” or thinking about the mental states of another agent ( Ansuini et al., 2015 ; Ciaramidaro et al., 2014 ). A key finding of this research is that intention understanding depends fundamentally on the integration of motor-related (“mirroring”) brain regions and non-motor (“mentalizing”) brain regions ( Becchio et al., 2012 ). The present results parallel this finding, and point to the idea that in the context of action representation, motor and non-motor brain areas are not two separate brain networks, but rather one integrated functional system.

Predictive Coding and the Construction of Meaning in the Action Observation Network

A critical question raised by the idea that the temporal-orbitofrontal brain regions in which RS was observed here constitute a superordinate, meaning-sensitive functional module of the AON is how activity in subordinate AON modules is integrated at this higher level to produce differential neural firing patterns in response to different meaningful body expressions. That is, what are the neural mechanisms underlying the observed sensitivity to meaning in body language, and furthermore, why are these mechanisms subject to adaptation through repetition (RS)? While the present results do not provide direct evidence to answer these questions, we propose that a "predictive coding" interpretation provides a coherent model of action representation (Brass et al., 2007; Kilner and Frith, 2008; Brown and Brüne, 2012) and yields useful predictions about the neural processes by which meaning is decoded that would account for the observed RS effect. The primary mechanism invoked by a predictive coding framework of action understanding is recurrent feed-forward and feedback processing across the various levels of the AON, which supports a Bayesian system of predictive neural coding, feedback processes, and prediction error reduction at each level of action representation (Friston et al., 2011). According to this model, each level of the action observation hierarchy generates predictions to anticipate neural activity at lower levels of the hierarchy. Predictions in the form of neural codes are sent to lower levels through feedback connections and compared with actual subordinate neural representations. Any discrepancy between neural predictions and actual representations is coded as prediction error. Information regarding prediction error is sent through recurrent feed-forward projections to superordinate regions and used to update predictive priors such that subsequent prediction error is minimized. Together, these Bayes-optimal neural ensemble operations converge on the most probable inference for representation at the superordinate level (Friston et al., 2011) and, ultimately, action understanding based on the integration of representations at each level of the action observation hierarchy (Chambon et al., 2011; Kilner, 2011).

A predictive coding account of the present results would suggest that initial feed-forward inputs from subordinate levels of the AON provided the superordinate temporal-orbitofrontal module with information regarding movement kinematics, prosody, gestural elements, and dynamic inflections, which, when integrated with other inputs based on prior experience, would provide a basis for an initial prediction about potential meanings of a body expression. This prediction would yield a generative neural model about the movement dynamics that would be expected given the predicted meaning of the observed body expression, which would be fed back to lower levels of the network that coded movement dynamics and sensorimotor associations. Predictive activity would be contrasted with actual representations as movement information was accrued throughout the performance, and the resulting prediction error would be utilized via feed-forward projections to temporal-orbitofrontal regions to update predictive codes regarding meaning and minimize subsequent prediction error. In this way, the meaningful affective theme being expressed by the performer would be converged upon through recurrent Bayes-optimal neural ensemble operations. Thus, meaning expressed through body language would be accrued iteratively in temporal-orbitofrontal regions by integrating neural representations of various facets of action decoded throughout the AON. Interestingly, and consistent with a model in which an iterative process accrued information over time, we observed that BOLD responses in AON regions peaked more slowly than expected based on SPM's canonical HRF as the videos were viewed over an extended (10 s) duration. Under an iterative predictive coding model, RS for repeated themes could be accounted for by reduced initial generative activity in temporal-orbitofrontal regions due to better constrained predictions about potential meanings conveyed by observed movement, more efficient convergence on an inference due to faster minimization of prediction error, or some combination of both of these mechanisms. The present results provide indirect evidence for the former account, in that more abstract, less constrained movement metaphors relied upon by expressive dance resulted in greater RS due to larger BOLD responses for novel themes relative to the more concrete, better-constrained associations conveyed by pantomime.
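To make this proposal concrete, the toy model below caricatures the predictive-coding account: a belief over candidate themes is iteratively updated from noisy movement "evidence," and the summed prediction error serves as a crude stand-in for neural activity. Everything in it (one-hot theme codes, noise level, learning rate, priors) is an assumption for illustration only. On average, a sharper prior, standing in for a repeated theme, accumulates less prediction error than a flat prior, standing in for a novel theme, mimicking RS.

```python
import numpy as np

rng = np.random.default_rng(11)
THEMES = np.eye(4)   # four candidate meanings, idealized one-hot codes

def infer(theme_idx, prior, n_steps=10, noise=0.3, lr=0.5):
    """Iteratively refine a belief over themes from noisy evidence; return
    the cumulative prediction error as a proxy for neural activity."""
    belief = prior.copy()
    total_error = 0.0
    for _ in range(n_steps):
        evidence = THEMES[theme_idx] + rng.normal(0.0, noise, 4)
        error = evidence - belief @ THEMES    # prediction error signal
        total_error += np.abs(error).sum()
        belief = np.clip(belief + lr * (THEMES @ error), 1e-6, None)
        belief /= belief.sum()                # keep the belief a distribution
    return total_error

def avg_error(prior, n_runs=200):
    return float(np.mean([infer(0, prior) for _ in range(n_runs)]))

flat = np.full(4, 0.25)                       # vague prior: a "novel" theme
sharp = np.array([0.7, 0.1, 0.1, 0.1])        # primed prior: a "repeated" theme
print("novel:", avg_error(flat), "repeated:", avg_error(sharp))
```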

Shared Brain Substrates for Meaning in Action and Language

The middle temporal gyrus and superior temporal sulcus regions identified here as part of a functional module of the AON that “reads” body language have been linked previously to a variety of high-level linguistic domains related to understanding meaning. Among these are conceptual knowledge (Lambon Ralph et al., 2009), language comprehension (Hasson et al., 2006; Noppeney and Penny, 2006; Price, 2010), sensitivity to the congruency between intentions and actions, both verbal/conceptual (Deen and McCarthy, 2010) and perceptual/implicit (Wyk et al., 2009), as well as understanding abstract language and metaphorical descriptions of action (Desai et al., 2011). While together these studies demonstrate that high-level linguistic processing involves bilateral superior and middle temporal regions, there is evidence for a general predominance of the left hemisphere in comprehending semantics (Price, 2010), and a predominance of the right hemisphere in incorporating socio-emotional information and affective context (Wyk et al., 2009). For example, brain atrophy associated with a primary progressive aphasia characterized by profound disturbances in semantic comprehension occurs bilaterally in anterior middle temporal regions, but is more pronounced in the left hemisphere (Gorno-Tempini et al., 2004). In contrast, neural degeneration in right hemisphere orbitofrontal, insula, and anterior middle temporal regions is associated not only with semantic dementia but also with deficits in socio-emotional sensitivity and regulation (Rosen et al., 2005).

This hemispheric asymmetry in the brain substrates associated with interpreting meaning in verbal language is paralleled in the present results, which not only link the same bilateral temporal-orbitofrontal brain substrates to comprehending meaning from affectively expressive body language, but also demonstrate a predominance of the left hemisphere in deciphering the particularly abstract movement metaphors conveyed by dance. This asymmetry was evident as greater RS for repeated themes for dance relative to pantomime, driven by greater initial activation for novel themes, suggesting that these left-hemisphere regions were engaged more vigorously when decoding more abstract movement metaphors. Together, these results illustrate a striking overlap in the brain substrates involved in processing meaning in verbal language and decoding meaning from expressive body movement. This overlap suggests that a long-hypothesized evolutionary link between gestural body movement and language (Hewes et al., 1973; Harnad et al., 1976; Rizzolatti and Arbib, 1998; Corballis, 2003) may be instantiated by a network of shared brain substrates for representing semiotic structure, which constitutes the informational scaffolding for building meaning in both language and gesture (Lemke, 1987; Freeman, 1997; McNeill, 2012; Lhommet and Marsella, 2013). While speculative, under this view the temporal-orbitofrontal AON module for coding meaning observed here may provide a neural basis for semiosis (the construction of meaning), which would lend support to the intriguing philosophical argument that meaning is fundamentally grounded in processes of the body, the brain, and the social environment within which they are immersed (Thibault, 2004).

Summary and Conclusions

The present results identify a system of temporal, orbitofrontal, insula, and amygdala brain regions that supports the meaningful interpretation of expressive body language. We propose that these areas reveal a previously undefined superordinate functional module within a known, stratified hierarchical brain network for action representation. The findings are consistent with a predictive coding model of action understanding, wherein the meaning that is imbued into expressive body movements through subtle kinematics and prosodic nuances is decoded as a distinct property of action via feed-forward and feedback processing across the levels of a hierarchical AON. Under this view, recurrent processing loops integrate lower-level representations of movement dynamics and socio-affective perceptual information to generate, evaluate, and update predictive inferences about expressive content that are mediated in a superordinate temporal-orbitofrontal module of the AON. Thus, while lower-level action representation in motor-related brain areas (sometimes referred to as a human “mirror neuron system”) may be a key step in the construction of meaning from movement, it is not these motor areas that code the specific meaning of an expressive body movement. Rather, we have demonstrated an additional level of the cortical action representation hierarchy in non-motor regions of the AON. The results highlight an important link between action representation and language, and point to the possibility of shared brain substrates for constructing meaning in both domains.

Author Contributions

CT, GS, and SG designed the experiment. CT and GS created stimuli, which included recruiting professional dancers and filming expressive movement sequences. GS carried out video editing. CT completed computer programming for experimental control and data analysis. GS and CT recruited participants and conducted behavioral and fMRI testing. CT and SG designed the data analysis and CT and GS carried it out. GS conducted a literature review, and CT wrote the paper with reviews and edits from SG.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

Research supported by the James S. McDonnell Foundation.

Supplementary Material

The Supplementary Material for this article can be found online at: http://dx.doi.org/10.6084/m9.figshare.1508616

Adolphs, R., Damasio, H., Tranel, D., Cooper, G., and Damasio, A. R. (2000). A role for somatosensory cortices in the visual recognition of emotion as revealed by three-dimensional lesion mapping. J. Neurosci. 20, 2683–2690.

Adolphs, R., Tranel, D., and Damasio, A. R. (2003). Dissociable neural systems for recognizing emotions. Brain Cogn. 52, 61–69. doi: 10.1016/S0278-2626(03)00009-5

Allison, T., Puce, A., and McCarthy, G. (2000). Social perception from visual cues: role of the STS region. Trends Cogn. Sci. 4, 267–278. doi: 10.1016/S1364-6613(00)01501-1

Anderson, S. W., Bechara, A., Damasio, H., Tranel, D., and Damasio, A. R. (1999). Impairment of social and moral behavior related to early damage in human prefrontal cortex. Nat. Neurosci. 2, 1032–1037. doi: 10.1038/14833

Ansuini, C., Cavallo, A., Bertone, C., and Becchio, C. (2015). Intentions in the brain: the unveiling of Mister Hyde. Neuroscientist 21, 126–135. doi: 10.1177/1073858414533827

Astafiev, S. V., Stanley, C. M., Shulman, G. L., and Corbetta, M. (2004). Extrastriate body area in human occipital cortex responds to the performance of motor actions. Nat. Neurosci. 7, 542–548. doi: 10.1038/nn1241

Becchio, C., Cavallo, A., Begliomini, C., Sartori, L., Feltrin, G., and Castiello, U. (2012). Social grasping: from mirroring to mentalizing. Neuroimage 61, 240–248. doi: 10.1016/j.neuroimage.2012.03.013

Bechara, A., and Damasio, A. R. (2005). The somatic marker hypothesis: a neural theory of economic decision. Games Econ. Behav. 52, 336–372. doi: 10.1016/j.geb.2004.06.010

Bechara, A., Damasio, H., and Damasio, A. R. (2003). Role of the amygdala in decision making. Ann. N.Y. Acad. Sci. 985, 356–369. doi: 10.1111/j.1749-6632.2003.tb07094.x

Bertenthal, B. I., Longo, M. R., and Kosobud, A. (2006). Imitative response tendencies following observation of intransitive actions. J. Exp. Psychol. 32, 210–225. doi: 10.1037/0096-1523.32.2.210

Bonda, E., Petrides, M., Ostry, D., and Evans, A. (1996). Specific involvement of human parietal systems and the amygdala in the perception of biological motion. J. Neurosci. 16, 3737–3744.

Botvinick, M. M. (2008). Hierarchical models of behavior and prefrontal function. Trends Cogn. Sci. 12, 201–208. doi: 10.1016/j.tics.2008.02.009

Brainard, D. H. (1997). The psychophysics toolbox. Spat. Vis. 10, 433–436. doi: 10.1163/156856897X00357

Brass, M., Schmitt, R. M., Spengler, S., and Gergely, G. (2007). Investigating action understanding: inferential processes versus action simulation. Curr. Biol. 17, 2117–2121. doi: 10.1016/j.cub.2007.11.057

Brown, E. C., and Brüne, M. (2012). The role of prediction in social neuroscience. Front. Hum. Neurosci . 6:147. doi: 10.3389/fnhum.2012.00147

Buccino, G., Binkofski, F., Fink, G. R., Fadiga, L., Fogassi, L., Gallese, V., et al. (2001). Action observation activates premotor and parietal areas in a somatotopic manner: an fMRI study. Eur. J. Neurosci. 13, 400–404. doi: 10.1046/j.1460-9568.2001.01385.x

Calvo-Merino, B., Glaser, D. E., Grèzes, J., Passingham, R. E., and Haggard, P. (2005). Action observation and acquired motor skills: an FMRI study with expert dancers. Cereb. Cortex 15, 1243. doi: 10.1093/cercor/bhi007

Cardinal, R. N., Parkinson, J. A., Hall, J., and Everitt, B. J. (2002). Emotion and motivation: the role of the amygdala, ventral striatum, and prefrontal cortex. Neurosci. Biobehav. Rev. 26, 321–352. doi: 10.1016/S0149-7634(02)00007-6

Chambon, V., Domenech, P., Pacherie, E., Koechlin, E., Baraduc, P., and Farrer, C. (2011). What are they up to? The role of sensory evidence and prior knowledge in action understanding. PLoS ONE 6:e17133. doi: 10.1371/journal.pone.0017133

Ciaramidaro, A., Becchio, C., Colle, L., Bara, B. G., and Walter, H. (2014). Do you mean me? Communicative intentions recruit the mirror and the mentalizing system. Soc. Cogn. Affect. Neurosci . 9, 909–916. doi: 10.1093/scan/nst062

Cooper, R. P., and Shallice, T. (2006). Hierarchical schemas and goals in the control of sequential behavior. Psychol. Rev. 113, 887–916. discussion 917–931. doi: 10.1037/0033-295x.113.4.887

Corballis, M. C. (2003). “From hand to mouth: the gestural origins of language,” in Language Evolution: The States of the Art , eds M. H. Christiansen and S. Kirby (Oxford University Press). Available online at: http://groups.lis.illinois.edu/amag/langev/paper/corballis03fromHandToMouth.html

Cross, E. S., Hamilton, A. F. C., and Grafton, S. T. (2006). Building a motor simulation de novo : observation of dance by dancers. Neuroimage 31, 1257–1267. doi: 10.1016/j.neuroimage.2006.01.033

Cross, E. S., Kraemer, D. J. M., Hamilton, A. F. D. C., Kelley, W. M., and Grafton, S. T. (2009). Sensitivity of the action observation network to physical and observational learning. Cereb. Cortex 19, 315. doi: 10.1093/cercor/bhn083

Damasio, A. R., Grabowski, T. J., Bechara, A., Damasio, H., Ponto, L. L., Parvizi, J., et al. (2000). Subcortical and cortical brain activity during the feeling of self-generated emotions. Nat. Neurosci. 3, 1049–1056. doi: 10.1038/79871

Dapretto, M., Davies, M. S., Pfeifer, J. H., Scott, A. A., Sigman, M., Bookheimer, S. Y., et al. (2006). Understanding emotions in others: mirror neuron dysfunction in children with autism spectrum disorders. Nat. Neurosci. 9, 28–30. doi: 10.1038/nn1611

Dean, P., Redgrave, P., and Westby, G. W. M. (1989). Event or emergency? Two response systems in the mammalian superior colliculus. Trends Neurosci . 12, 137–147. doi: 10.1016/0166-2236(89)90052-0

Decety, J., and Ickes, W. (2011). The Social Neuroscience of Empathy . Cambridge, MA: MIT Press.

Deen, B., and McCarthy, G. (2010). Reading about the actions of others: biological motion imagery and action congruency influence brain activity. Neuropsychologia 48, 1607–1615. doi: 10.1016/j.neuropsychologia.2010.01.028

de Gelder, B. (2006). Towards the neurobiology of emotional body language. Nat. Rev. Neurosci. 7, 242–249. doi: 10.1038/nrn1872

de Gelder, B., and Hadjikhani, N. (2006). Non-conscious recognition of emotional body language. Neuroreport 17, 583. doi: 10.1097/00001756-200604240-00006

Desai, R. H., Binder, J. R., Conant, L. L., Mano, Q. R., and Seidenberg, M. S. (2011). The neural career of sensory-motor metaphors. J. Cogn. Neurosci. 23, 2376–2386. doi: 10.1162/jocn.2010.21596

Desikan, R. S., Ségonne, F., Fischl, B., Quinn, B. T., Dickerson, B. C., Blacker, D., et al. (2006). An automated labeling system for subdividing the human cerebral cortex on MRI scans into gyral based regions of interest. Neuroimage 31, 968–980. doi: 10.1016/j.neuroimage.2006.01.021

Eickhoff, S. B., Heim, S., Zilles, K., and Amunts, K. (2006). Testing anatomically specified hypotheses in functional imaging using cytoarchitectonic maps. Neuroimage 32, 570–582. doi: 10.1016/j.neuroimage.2006.04.204

Eickhoff, S. B., Paus, T., Caspers, S., Grosbras, M. H., Evans, A. C., Zilles, K., et al. (2007). Assignment of functional activations to probabilistic cytoarchitectonic areas revisited. Neuroimage 36, 511–521. doi: 10.1016/j.neuroimage.2007.03.060

Eickhoff, S. B., Stephan, K. E., Mohlberg, H., Grefkes, C., Fink, G. R., Amunts, K., et al. (2005). A new SPM toolbox for combining probabilistic cytoarchitectonic maps and functional imaging data. Neuroimage 25, 1325–1335. doi: 10.1016/j.neuroimage.2004.12.034

Fadiga, L., Fogassi, L., Gallese, V., and Rizzolatti, G. (2000). Visuomotor neurons: ambiguity of the discharge or motor perception? Int. J. Psychophysiol. 35, 165–177. doi: 10.1016/S0167-8760(99)00051-3

Ferrari, P. F., Gallese, V., Rizzolatti, G., and Fogassi, L. (2003). Mirror neurons responding to the observation of ingestive and communicative mouth actions in the monkey ventral premotor cortex. Eur. J. Neurosci. 17, 1703–1714. doi: 10.1046/j.1460-9568.2003.02601.x

Frazier, J. A., Chiu, S., Breeze, J. L., Makris, N., Lange, N., Kennedy, D. N., et al. (2005). Structural brain magnetic resonance imaging of limbic and thalamic volumes in pediatric bipolar disorder. Am. J. Psychiatry 162, 1256–1265. doi: 10.1176/appi.ajp.162.7.1256

Freeman, W. J. (1997). A neurobiological interpretation of semiotics: meaning vs. representation. IEEE Int. Conf. Syst. Man Cybern. Comput. Cybern. Simul. 2, 93–102. doi: 10.1109/ICSMC.1997.638197

Frey, S. H., and Gerry, V. E. (2006). Modulation of neural activity during observational learning of actions and their sequential orders. J. Neurosci. 26, 13194–13201. doi: 10.1523/JNEUROSCI.3914-06.2006

Friederici, A. D., Rüschemeyer, S.-A., Hahne, A., and Fiebach, C. J. (2003). The role of left inferior frontal and superior temporal cortex in sentence comprehension: localizing syntactic and semantic processes. Cereb. Cortex 13, 170–177. doi: 10.1093/cercor/13.2.170

Friston, K., Mattout, J., and Kilner, J. (2011). Action understanding and active inference. Biol. Cybern. 104, 137–60. doi: 10.1007/s00422-011-0424-z

Gallese, V. (2003). The roots of empathy: the shared manifold hypothesis and the neural basis of intersubjectivity. Psychopathology 36, 171–180. doi: 10.1159/000072786

Gallese, V., Fadiga, L., Fogassi, L., and Rizzolatti, G. (1996). Action recognition in the premotor cortex. Brain 119, 593. doi: 10.1093/brain/119.2.593

Gallese, V., and Goldman, A. (1998). Mirror neurons and the simulation theory of mind-reading. Trends Cogn. Sci. 2, 493–501. doi: 10.1016/S1364-6613(98)01262-5

Gallese, V., Keysers, C., and Rizzolatti, G. (2004). A unifying view of the basis of social cognition. Trends Cogn. Sci. 8, 396–403. doi: 10.1016/j.tics.2004.07.002

Goldstein, J. M., Seidman, L. J., Makris, N., Ahern, T., O'Brien, L. M., Caviness, V. S., et al. (2007). Hypothalamic abnormalities in Schizophrenia: sex effects and genetic vulnerability. Biol. Psychiatry 61, 935–945. doi: 10.1016/j.biopsych.2006.06.027

Golub, G. H., and Reinsch, C. (1970). Singular value decomposition and least squares solutions. Numer. Math. 14, 403–420. doi: 10.1007/BF02163027

Gorno-Tempini, M. L., Dronkers, N. F., Rankin, K. P., Ogar, J. M., Phengrasamy, L., Rosen, H. J., et al. (2004). Cognition and anatomy in three variants of primary progressive aphasia. Ann. Neurol. 55, 335–346. doi: 10.1002/ana.10825

Grafton, S. T. (2009). Embodied cognition and the simulation of action to understand others. Ann. N.Y. Acad. Sci. 1156, 97–117. doi: 10.1111/j.1749-6632.2009.04425.x

Grafton, S. T., Arbib, M. A., Fadiga, L., and Rizzolatti, G. (1996). Localization of grasp representations in humans by positron emission tomography. Exp. Brain Res. 112, 103–111. doi: 10.1007/BF00227183

Grafton, S. T., and Tipper, C. M. (2012). Decoding intention: a neuroergonomic perspective. Neuroimage 59, 14–24. doi: 10.1016/j.neuroimage.2011.05.064

Grèzes, J., and Decety, J. (2001). Functional anatomy of execution, mental simulation, observation, and verb generation of actions: a meta-analysis. Hum. Brain Mapp. 12, 1–19. doi: 10.1002/1097-0193(200101)12:1<1::AID-HBM10>3.0.CO;2-V

Grezes, J., Fonlupt, P., Bertenthal, B., Delon-Martin, C., Segebarth, C., Decety, J., et al. (2001). Does perception of biological motion rely on specific brain regions? Neuroimage 13, 775–785. doi: 10.1006/nimg.2000.0740

Grill-Spector, K., Henson, R., and Martin, A. (2006). Repetition and the brain: neural models of stimulus-specific effects. Trends Cogn. Sci. 10, 14–23. doi: 10.1016/j.tics.2005.11.006

Grill-Spector, K., and Malach, R. (2001). fMR-adaptation: a tool for studying the functional properties of human cortical neurons. Acta Psychol. 107, 293–321. doi: 10.1016/S0001-6918(01)00019-1

Hadjikhani, N., and de Gelder, B. (2003). Seeing fearful body expressions activates the fusiform cortex and amygdala. Curr. Biol. 13, 2201–2205. doi: 10.1016/j.cub.2003.11.049

Hamilton, A. F. C., and Grafton, S. T. (2006). Goal representation in human anterior intraparietal sulcus. J. Neurosci. 26, 1133. doi: 10.1523/JNEUROSCI.4551-05.2006

Hamilton, A. F. D. C., and Grafton, S. T. (2008). Action outcomes are represented in human inferior frontoparietal cortex. Cereb. Cortex 18, 1160–1168. doi: 10.1093/cercor/bhm150

Hamilton, A. F., and Grafton, S. T. (2007). “The motor hierarchy: from kinematics to goals and intentions,” in Sensorimotor Foundations of Higher Cognition: Attention and Performance , Vol. 22, eds P. Haggard, Y. Rossetti, and M. Kawato (Oxford: Oxford University Press), 381–402.

Hari, R., Forss, N., Avikainen, S., Kirveskari, E., Salenius, S., and Rizzolatti, G. (1998). Activation of human primary motor cortex during action observation: a neuromagnetic study. Proc. Natl. Acad. Sci. U.S.A. 95, 15061–15065. doi: 10.1073/pnas.95.25.15061

Harnad, S. R., Steklis, H. D., and Lancaster, J. (eds.). (1976). “Origins and evolution of language and speech,” in Annals of the New York Academy of Sciences (New York, NY: New York Academy of Sciences), 280.

Hasson, U., Nusbaum, H. C., and Small, S. L. (2006). Repetition suppression for spoken sentences and the effect of task demands. J. Cogn. Neurosci. 18, 2013–2029. doi: 10.1162/jocn.2006.18.12.2013

Haxby, J. V., Hoffman, E. A., and Gobbini, M. I. (2002). Human neural systems for face recognition and social communication. Biol. Psychiatry 51, 59–67. doi: 10.1016/S0006-3223(01)01330-0

Hewes, G. W., Andrew, R. J., Carini, L., Choe, H., Gardner, R. A., Kortlandt, A., et al. (1973). Primate communication and the gestural origin of language [and comments and reply]. Curr. Anthropol. 14, 5–24. doi: 10.1086/201401

Hoffman, E. A., and Haxby, J. V. (2000). Distinct representations of eye gaze and identity in the distributed human neural system for face perception. Nat. Neurosci. 3, 80–84. doi: 10.1038/71152

Iacoboni, M., and Dapretto, M. (2006). The mirror neuron system and the consequences of its dysfunction. Nat. Rev. Neurosci. 7, 942–51. doi: 10.1038/nrn2024

Iacoboni, M., Molnar-Szakacs, I., Gallese, V., Buccino, G., Mazziotta, J. C., and Rizzolatti, G. (2005). Grasping the intentions of others with one's own mirror neuron system. PLoS Biol. 3:e79. doi: 10.1371/journal.pbio.0030079

Jacob, P. (2008). What do mirror neurons contribute to human social cognition? Mind Lang. 23, 190–223. doi: 10.1111/j.1468-0017.2007.00337.x

Jenkinson, M., Beckmann, C. F., Behrens, T. E. J., Woolrich, M. W., and Smith, S. M. (2012). Fsl. Neuroimage 62, 782–90. doi: 10.1016/j.neuroimage.2011.09.015

Kilner, J. M. (2011). More than one pathway to action understanding. Trends Cogn. Sci. 15, 352–357. doi: 10.1016/j.tics.2011.06.005

Kilner, J. M., and Frith, C. D. (2008). Action observation: inferring intentions without mirror neurons. Curr. Biol. 18, R32–R33. doi: 10.1016/j.cub.2007.11.008

Kirchhoff, B. A., Wagner, A. D., Maril, A., and Stern, C. E. (2000). Prefrontal-temporal circuitry for episodic encoding and subsequent memory. J. Neurosci. 20, 6173–6180.

Lambon Ralph, M. A., Pobric, G., and Jefferies, E. (2009). Conceptual knowledge is underpinned by the temporal pole bilaterally: convergent evidence from rTMS. Cereb. Cortex 19, 832–838. doi: 10.1093/cercor/bhn131

Lemke, J. L. (1987). “Strategic deployment of speech and action: a sociosemiotic analysis,” in Semiotics 1983: Proceedings of the Semiotic Society of America ‘Snowbird’ Conference , eds J. Evans and J. Deely (Lanham, MD: University Press of America), 67–79.

Lhommet, M., and Marsella, S. C. (2013). “Gesture with meaning,” in Intelligent Virtual Agents , eds Y. Nakano, M. Neff, A. Paiva, and M. Walker (Berlin; Heidelberg: Springer), 303–312. doi: 10.1007/978-3-642-40415-3_27

Makris, N., Goldstein, J. M., Kennedy, D., Hodge, S. M., Caviness, V. S., Faraone, S. V., et al. (2006). Decreased volume of left and total anterior insular lobule in schizophrenia. Schizophr. Res. 83, 155–171. doi: 10.1016/j.schres.2005.11.020

McCall, C., Tipper, C. M., Blascovich, J., and Grafton, S. T. (2012). Attitudes trigger motor behavior through conditioned associations: neural and behavioral evidence. Soc. Cogn. Affect. Neurosci. 7, 841–889. doi: 10.1093/scan/nsr057

McCarthy, G., Puce, A., Gore, J. C., and Allison, T. (1997). Face-specific processing in the human fusiform gyrus. J. Cogn. Neurosci. 9, 605–610. doi: 10.1162/jocn.1997.9.5.605

McNeill, D. (2012). How Language Began: Gesture and Speech in Human Evolution . Cambridge: Cambridge University Press. Available online at: https://scholar.google.ca/scholar?q=How+Language+Began+Gesture+and+Speech+in+Human+Evolution&hl=en&as_sdt=0&as_vis=1&oi=scholart&sa=X&ei=-ezxVISFIdCboQS1q4KACQ&ved=0CBsQgQMwAA

Morris, D. (2002). Peoplewatching: The Desmond Morris Guide to Body Language . New York, NY: Vintage Books. Available online at: http://www.amazon.ca/Peoplewatching-Desmond-Morris-Guide-Language/dp/0099429780 (Accessed March 10, 2014).

Ni, W., Constable, R. T., Mencl, W. E., Pugh, K. R., Fulbright, R. K., Shaywitz, S. E., et al. (2000). An event-related neuroimaging study distinguishing form and content in sentence processing. J. Cogn. Neurosci. 12, 120–133. doi: 10.1162/08989290051137648

Nichols, T. E. (2012). Multiple testing corrections, nonparametric methods, and random field theory. Neuroimage 62, 811–815. doi: 10.1016/j.neuroimage.2012.04.014

Niedenthal, P. M., Barsalou, L. W., Winkielman, P., Krauth-Gruber, S., and Ric, F. (2005). Embodiment in attitudes, social perception, and emotion. Personal. Soc. Psychol. Rev. 9, 184–211. doi: 10.1207/s15327957pspr0903_1

Noppeney, U., and Penny, W. D. (2006). Two approaches to repetition suppression. Hum. Brain Mapp. 27, 411–416. doi: 10.1002/hbm.20242

Ogawa, K., and Inui, T. (2011). Neural representation of observed actions in the parietal and premotor cortex. Neuroimage 56, 728–35. doi: 10.1016/j.neuroimage.2010.10.043

Ollinger, J. M., Shulman, G. L., and Corbetta, M. (2001). Separating processes within a trial in event-related functional MRI: II. Analysis. Neuroimage 13, 218–229. doi: 10.1006/nimg.2000.0711

Oosterhof, N. N., Tipper, S. P., and Downing, P. E. (2013). Crossmodal and action-specific: neuroimaging the human mirror neuron system. Trends Cogn. Sci. 17, 311–338. doi: 10.1016/j.tics.2013.04.012

Ortigue, S., Sinigaglia, C., Rizzolatti, G., Grafton, S. T., and Rochelle, E. T. (2010). Understanding actions of others: the electrodynamics of the left and right hemispheres. A high-density EEG neuroimaging study. PLoS ONE 5:e12160. doi: 10.1371/journal.pone.0012160

Peelen, M. V., and Downing, P. E. (2005). Selectivity for the human body in the fusiform gyrus. J. Neurophysiol. 93, 603–608. doi: 10.1152/jn.00513.2004

di Pellegrino, G., Fadiga, L., Fogassi, L., Gallese, V., and Rizzolatti, G. (1992). Understanding motor events: a neurophysiological study. Exp. Brain Res. 91, 176–180. doi: 10.1007/BF00230027

Pelli, D. G., and Brainard, D. H. (1997). The VideoToolbox software for visual psychophysics: transforming numbers into movies. Spat. Vis. 10, 433–436. doi: 10.1163/156856897X00366

Pelphrey, K. A., Morris, J. P., Michelich, C. R., Allison, T., and McCarthy, G. (2005). Functional anatomy of biological motion perception in posterior temporal cortex: an fMRI study of eye, mouth, and hand movements. Cereb. Cortex 15, 1866–1876. doi: 10.1093/cercor/bhi064

Poldrack, R. A., Mumford, J. A., and Nichols, T. E. (2011). Handbook of Functional MRI Data Analysis . New York, NY: Cambridge University Press. doi: 10.1017/cbo9780511895029

Price, C. J. (2010). The anatomy of language: a review of 100 fMRI studies published in 2009. Ann. N.Y. Acad. Sci. 1191, 62–88. doi: 10.1111/j.1749-6632.2010.05444.x

Mitz, A. R., Godschalk, M., and Wise, S. P. (1991). Learning-dependent neuronal activity in the premotor cortex: activity during the acquisition of conditional motor associations. J. Neurosci. 11, 1855–1872.

Rizzolatti, G., and Arbib, M. A. (1998). Language within our grasp. Trends Neurosci. 21, 188–194. doi: 10.1016/S0166-2236(98)01260-0

Rizzolatti, G., and Craighero, L. (2004). The mirror-neuron system. Annu. Rev. Neurosci. 27, 169–192. doi: 10.1146/annurev.neuro.27.070203.144230

Rizzolatti, G., Fadiga, L., Gallese, V., and Fogassi, L. (1996a). Premotor cortex and the recognition of motor actions. Cogn. brain Res. 3, 131–141. doi: 10.1016/0926-6410(95)00038-0

Rizzolatti, G., Fadiga, L., Matelli, M., Bettinardi, V., Paulesu, E., Perani, D., et al. (1996b). Localization of grasp representations in humans by PET: 1. Observation versus execution. Exp. Brain Res. 111, 246–252. doi: 10.1007/BF00227301

Rizzolatti, G., Fogassi, L., and Gallese, V. (2001). Neurophysiological mechanisms underlying the understanding and imitation of action. Nat. Rev. Neurosci. 2, 661–670. doi: 10.1038/35090060

Rosen, H. J., Allison, S. C., Schauer, G. F., Gorno-Tempini, M. L., Weiner, M. W., and Miller, B. L. (2005). Neuroanatomical correlates of behavioural disorders in dementia. Brain 128, 2612–2625. doi: 10.1093/brain/awh628

Sah, P., Faber, E. S. L., De Armentia, M. L., and Power, J. (2003). The amygdaloid complex: anatomy and physiology. Physiol. Rev. 83, 803–834. doi: 10.1152/physrev.00002.2003

Shapiro, L. (2008). Making sense of mirror neurons. Synthese 167, 439–456. doi: 10.1007/s11229-008-9385-8

Smith, S. M., Jenkinson, M., Woolrich, M. W., Beckmann, C. F., Behrens, T. E. J., Johansen-Berg, H., et al. (2004). Advances in functional and structural MR image analysis and implementation as FSL. Neuroimage 23(Suppl. 1), S208–S219. doi: 10.1016/j.neuroimage.2004.07.051

Tettamanti, M., Buccino, G., Saccuman, M. C., Gallese, V., Danna, M., Scifo, P., et al. (2005). Listening to action-related sentences activates fronto-parietal motor circuits. J. Cogn. Neurosci. 17, 273–281. doi: 10.1162/0898929053124965

Thibault, P. (2004). Brain, Mind and the Signifying Body: An Ecosocial Semiotic Theory . London: A&C Black. Available online at: https://scholar.google.ca/scholar?q=Brain,+Mind+and+the+Signifying+Body:+An+Ecosocial+Semiotic+Theory&hl=en&as_sdt=0&as_vis=1&oi=scholart&sa=X&ei=Lf3xVOayBMK0ogSniYLwCA&ved=0CB0QgQMwAA

Tunik, E., Rice, N. J., Hamilton, A. F., and Grafton, S. T. (2007). Beyond grasping: representation of action in human anterior intraparietal sulcus. Neuroimage 36, T77–T86. doi: 10.1016/j.neuroimage.2007.03.026

Uithol, S., van Rooij, I., Bekkering, H., and Haselager, P. (2011). Understanding motor resonance. Soc. Neurosci. 6, 388–397. doi: 10.1080/17470919.2011.559129

Ulloa, E. R., and Pineda, J. A. (2007). Recognition of point-light biological motion: Mu rhythms and mirror neuron activity. Behav. Brain Res. 183, 188–194. doi: 10.1016/j.bbr.2007.06.007

Urgesi, C., Candidi, M., Ionta, S., and Aglioti, S. M. (2006). Representation of body identity and body actions in extrastriate body area and ventral premotor cortex. Nat. Neurosci. 10, 30–31. doi: 10.1038/nn1815

Van Essen, D. C. (2005). A Population-Average, Landmark- and Surface-based (PALS) atlas of human cerebral cortex. Neuroimage 28, 635–662. doi: 10.1016/j.neuroimage.2005.06.058

Van Essen, D. C., Drury, H. A., Dickson, J., Harwell, J., Hanlon, D., and Anderson, C. H. (2001). An integrated software suite for surface-based analyses of cerebral cortex. J. Am. Med. Inform. Assoc. 8, 443–459. doi: 10.1136/jamia.2001.0080443

Wiggs, C. L., and Martin, A. (1998). Properties and mechanisms of perceptual priming. Curr. Opin. Neurobiol. 8, 227–233. doi: 10.1016/S0959-4388(98)80144-X

Wyk, B. C. V., Hudac, C. M., Carter, E. J., Sobel, D. M., and Pelphrey, K. A. (2009). Action understanding in the superior temporal sulcus region. Psychol. Sci. 20, 771. doi: 10.1111/j.1467-9280.2009.02359.x

Zentgraf, K., Stark, R., Reiser, M., Künzell, S., Schienle, A., Kirsch, P., et al. (2005). Differential activation of pre-SMA and SMA proper during action observation: effects of instructions. Neuroimage 26, 662–672. doi: 10.1016/j.neuroimage.2005.02.015

Keywords: action observation, dance, social neuroscience, fMRI, repetition suppression, predictive coding

Citation: Tipper CM, Signorini G and Grafton ST (2015) Body language in the brain: constructing meaning from expressive movement. Front. Hum. Neurosci. 9:450. doi: 10.3389/fnhum.2015.00450

Received: 28 March 2015; Accepted: 28 July 2015; Published: 21 August 2015.

Copyright © 2015 Tipper, Signorini and Grafton. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Christine M. Tipper, Mental Health and Integrated Neurobehavioral Development Research Core, Child and Family Research Institute, 3rd Floor - 938 West 28th Avenue, Vancouver, BC V5Z 4H4, Canada, [email protected]

Exploring Non-Verbal Communication and Body Language in Creating a Meaningful Life: Angela Merkel in Psychobiography

  • First Online: 01 January 2022

  • Ulrich Sollmann &
  • Claude-Hélène Mayer

Part of the book series: Sociocultural Psychology of the Lifecourse (SPL)

Psychobiography is a well-established methodological approach for exploring the entire lifespan, or specific events in the lives, of extraordinary individuals through psychological theories.

This study uses a new psychobiographical focus, exploring the interplay of personality, non-verbal communication, and body language to analyse the meaning of specific life events in the life of Angela Merkel, the contemporary German chancellor. It thereby contributes, through adult observation, to political-psychological psychobiographies of global women leaders.

The study evaluates how Merkel uses non-verbal communication and body language to establish herself as a meaningful chancellor.

Methodologically, a hermeneutical research paradigm is used, with Merkel purposefully sampled as the subject of research. The study draws on written accounts of Merkel for analysis and interpretation and refers to media scenarios as a relevant methodological reference for adult observation, exploring Merkel as a public figure.

Accordingly, the study expands on previously used theories in psychobiography while contributing new and original research on Angela Merkel as one of the world's women leaders.

Always be more than you appear and never appear to be more than you are. —Angela Merkel

Notes

Angela Merkel in Koelbl, 1999, p. 60.

Translated from German: “Ich bin ein Bewegungsidiot” (“I am a movement idiot”), Angela Merkel in Koelbl, 1999, p. 52.

Angela Merkel in Koelbl, 1999, p. 48.

Angela Merkel in Koelbl, 1999, p. 56.

Angela Merkel in Koelbl, 1999, p. 53.

Angela Merkel in Koelbl, 1999, p. 56.

Angela Merkel in Koelbl, 1999, p. 53.

Allport, G. W. (1961). Pattern and growth in personality . Holt, Rinehart and Winston.

Altmann, U., et al. (2021). Movement and emotional facial expressions during the adult attachment interview: Interaction effects of attachment and anxiety disorder. Psychopathology, 54(1), 39–46.

Amaechi, E. (2020). Women in management: Disrupting the prescribed gender norms. In B. Thakkar (Ed.), Paradigm shift in management philosophy . Palgrave Macmillan.

Baatjies, V. P. (2015). A psychobiography of Vuyiswa Mackonie . Master thesis, University of KwaZuluNatal, Durban, South Africa.

Ball, L., & Rutherford, A. (2008). Exceptional women: Their life and work in psychobiography. Psychology of Women Quarterly, 32 (4), 491–492.

Basson, M. (2020). Philip Kindred Dick: A psychobiography of the science fiction novelist . Doctoral dissertation, University of the Free State, Bloemfontein, South Africa.

Bauer, G. (2002). Körpersprache . https://www.youtube.com/watch?v=Scy80-mR1O8

Blair, T. (2010). Mein Weg . Bertelsmann.

de Waal, L. (2020). A psychobiographical study of Maya Angelou . Master thesis, Nelson Mandela Metropolitan University, Port Elisabeth, South Africa.

Die ZEIT. (2021). Entdecken. Kennen wir Sie? 18 February, pp. 55–56.

Doubell, M., & Struwig, M. (2014). Perceptions of factors influencing the career success of professional and business women in South Africa. South African Journal of Economic and Management Sciences, 17 (5) http://www.scielo.org.za/scielo.php?script=sci_arttext&pid=S2222-34362014000500001

Elms, A. C. (1994). Uncovering lives: The uneasy alliance of biography and psychology . Oxford University Press.

Elovitz, P. H. (2016). A psychobiographical and psycho-political comparison of Clinton and Trump. Journal of PsychoHistory, 44 (2), 90–113.

Forsa Polling Institute. (2003). Unpublished research project . Germany.

Fouché, J. P., du Plessis, R., & van Niekerk, R. (2017). Levinsonian seasons in the life of Steve Jobs: A psychobiographical case study. Indo-Pacific Journal of Phenomenology .

Fouché, J. P., & van Niekerk, R. (2010). Academic psychobiography in South Africa: Past, present and future. South Africa Journal of Psychology, 40 (4), 495–507.

Frey, S. (1999). Die Macht des Bildes . Hans Huber.

Hamburger Abendblatt. (2009). Die geheimen Zeichen der Kanzlerin . Hamburg.

Harisuker, N. (2016). Maya Angelou: A psychobiography . Master thesis in Psychology, University of Johannesburg, Johannesburg, South Africa.

Hingston, C. (2016). Rethinking African leadership: The need for more women leaders in Africa. Journal of Management & Administration, 1 , 72–82. https://hdl.handle.net/10520/EJC195105

Kertzer, J. D., & Tingley, D. (2018). Political psychology in international relations: Beyond the paradigms. Annual Review of Political Science, 21 , 319–339.

Koelbl, H. (1999). Spuren der Macht . Knesebeck.

Levinson, D. J. (1996). The seasons of a woman’s life . Ballantine Books.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry . Sage Publications Inc.

Lischke, U. (2002). Bewegungsanalyse – Ein interaktiver Zugang zum Körperbild . Workshop-Unterlage, III. Wiener Symposium Psychoanalyse und Körper, 27–28.9.2002, Wien.

Lowen, A. (2008). Bioenergetik . Rowohlt Verlag.

Mayer, C.-H. (2021). Leading with faith: Angela Merkel in psychobiographical perspective. In E. Po, R. Klipatrick, & T. Pratt (Eds.), Reimagining faith and management: The impact of faith in the workplace (pp. 32–46). Routledge.

Mayer, C.-H., & Kelley, J. L. (2021). The Heroine Archetype and Design Leadership in Bethlehem Tilahun Alemu – A psychobiological investigation. In S. K. Dhiman, J. F. Marques, J. Schmieder-Ramirey, & P. G. Malakyan (Eds.), Handbook of global leadership and followership: Integrating the best leadership theory and practice . Springer.

Mayer, C.-H., & Kovary, Z. (2019). New trends in psychobiography . Springer.

Mayer, C.-H., & van Niekerk, R. (2020). Creative minds of leaders in psychobiographical perspectives: Exploring the life and work of Christian Barnard and Angela Merkel. In S. Dhiman & J. Marquis (Eds.), New horizons in positive leadership and change (pp. 189–205). Springer.

Mayer, C.-H., van Niekerk, R., & Fouché, P. J. P. (2020). Holistic wellness in the life of Angela Merkel: A call to revise the Wheel of Wellness in the light of new Positive Psychology movements and socio-cultural changes. International Review of Psychiatry, 32 (7–8), 625–637.

McAdams, D. P. (2020). The strange case of Donald J. Trump. A psychological reckoning . Oxford University Press.

Musarrat Shaheen, S., & Pradhan, R. (2019). Sampling in qualitative research . https://www.semanticscholar.org/paper/Sampling-in-Qualitative-Research-Shaheen-Pradhan/83bf20b7af8258864950657d39182130184617c4

Mutuku, C. (2018). A psychobiography of Hillary Clinton . Grin Verlag.

Ottomeyer, K. (2009). Jörg Haider – Mythenbildung und Erbe . Drava Verlag.

Panelatti, A. F. (2018). Sylvia Plath: A psychobiographical study . Doctoral degree, University of the Free State, Bloemfontain, South Africa.

Ponterotto, J. (2017). A counsellor’s guide to conducting psychobiographical research. International Journal for the Advancement of Counselling, 39 (3), 249–263. https://doi.org/10.1007/s10447-017-9295-x

Ponterotto, J. G. (2015). In pursuit of William James’s McLean Hospital records: An inherent conflict between post-mortem privacy rights and advancing psychological science. Review of General Psychology, 19 (1), 96–105.

Prenter, T. (2015). A psychobiographical study of Charlize Theron . Master thesis, Rhodes University, South Africa.

Rustin, M. (2006). Infant observation research: What have we learned so far? Infant Observation, 9 (1), 35–62.

Schultz, W. T. (2005). Introducing psychobiography. In W. T. Schultz (Ed.), Handbook of psychobiography (pp. 3–18). Oxford University Press.

Sharma, D. (2016). Two progressive lives: Hillary Clinton and Ann Dunham. Clio’s Psyche, 23 (1), 37–42.

Sollmann, U. (1984). Bioenergetische Analyse . Synthesis Verlag.

Sollmann, U. (1995). Ein Vater, keine Tochter. Der Spiegel , 30 July.

Sollmann, U. (1997). Management by Körper . Orell Füssli.

Sollmann, U. (1999a). Schaulauf der Mächtigen – was uns die Körpersprache der Politiker verrät . Knaur Verlag.

Sollmann, U. (1999b). Management by Körper . Rowohlt.

Sollmann, U. (2002). Nicht anne Merkel packen . https://www.youtube.com/watch?v=48IXAUU3dRc

Sollmann, U. (2005). Interaktives Internetprojekt . www.charismakurve.de

Sollmann, U. (2006). Erwachsenenbeobachtung in der Politik. Psychotherapieforum, 14 (2), 91–95.

Sollmann, U. (2015). Einführung in Körpersprache und nonverbale Kommunikation (2nd ed.). Carl Auer.

Sollmann, U. (2016). Analysen zu Obama, Putin, Ma Yun . 身体语言及身体取向理疗入门. Peking.

Sollmann, U. (2017). Die nonverbale Wirkung von Rolle und Person. Coaching Magazin, 1 , 14–21.

Sueda, K., Mayer, C.-H., Kim, S., & Asai, A. (2020). Women in global leadership: Asian and African perspectives. The Aoyama Journal of International Politics, Economics and Communication, 104 , 39–59.

Terre Blanche, M., Durrheim, K., & Kelly, K. (2006). First steps in qualitative data analysis. In M. Terre Blanche, K. Durrheim, & D. Painter (Eds.), Research in practice: Applied methods for the social sciences (pp. 321–344). University of Cape Town Press.

Trautmann-Voigt. (2009). Grammatik der Gefühle. Schattauer Verlag.

Tschacher, W., & Bergomi, C. (2011). The implications of embodiment: Cognition and communication . Imprint Academic.

Tschacher, W., et al. (2021). Embodiment und Wirkfaktoren in Therapie, Beratung und Coaching. Organisationsberatung, Supervision und Coaching, 28, 73–84.

van Niekerk, R., & Fouché, J. P. P. (2010). The career development of entrepreneurs: A psychobiography of Anton Rupert (1916–2006) . SIOPSA Conference, The Forum, The Campus, Bryanston, Johannesburg.

Wegner, B. R. (2020). Psychobiography is trending amongst psychologists. Review of Claude-Hélène Mayer & Zoltan Kovary, eds., New trends in psychobiography (Springer Nature Switzerland AG: Springer International Publishing, 2019). Clio’s Psyche, 27 (1), 140–144.

Yin, R. K. (2018). Case study research and applications: Design and methods (6th ed.). Sage.

Author information

Authors and Affiliations

Shanghai University of Political Science and Law (SHUPL), Shanghai, China

Ulrich Sollmann

Sino-German Academy of Psychotherapy, Bochum, Germany

Department of Industrial Psychology and People Management, University of Johannesburg, Johannesburg, South Africa

Claude-Hélène Mayer

Kulturwissenschaftliche Fakultät, Europa Universität Viadrina, Frankfurt (Oder), Germany

Editor information

Editors and Affiliations

Department of Psychology, University of the Free State, Bloemfontein, South Africa

Paul J. P. Fouché

Department of Industrial and Organisational Psychology, Nelson Mandela University, Port Elizabeth, South Africa

Roelf Van Niekerk

Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Sollmann, U., Mayer, CH. (2021). Exploring Non-Verbal Communication and Body Language in Creating a Meaningful Life: Angela Merkel in Psychobiography. In: Mayer, CH., Fouché, P.J., Van Niekerk, R. (eds) Psychobiographical Illustrations on Meaning and Identity in Sociocultural Contexts . Sociocultural Psychology of the Lifecourse . Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-030-81238-6_4

DOI: https://doi.org/10.1007/978-3-030-81238-6_4

Published: 01 January 2022

Publisher Name: Palgrave Macmillan, Cham

Print ISBN: 978-3-030-81237-9

Online ISBN: 978-3-030-81238-6

eBook Packages: Behavioral Science and Psychology; Behavioral Science and Psychology (R0)

Language, Gesture, and Emotional Communication: An Embodied View of Social Interaction

Elisa De Stefani

1 Dipartimento di Neuroscienze, Università di Parma, Parma, Italy

Doriana De Marco

2 Consiglio Nazionale delle Ricerche, Istituto di Neuroscienze, Parma, Italy

Spoken language is an innate ability of the human being and represents the most widespread mode of social communication. The ability to share concepts, intentions, and feelings, and to respond to what others are feeling or saying, is crucial during social interactions. A growing body of evidence suggests that language evolved from manual gestures, gradually incorporating motor acts with vocal elements. In this evolutionary context, the human mirror mechanism (MM) would permit the passage from “doing something” to “communicating it to someone else.” In this perspective, the MM would mediate semantic processes, being involved both in the execution and in the understanding of messages expressed by words or gestures. Thus, the recognition of action-related words would activate somatosensory regions, reflecting the semantic grounding of these symbols in action information. Here, we address the role of the sensorimotor cortex, and of the human MM more generally, in both language perception and understanding, focusing on recent studies on the integration between symbolic gestures and speech. We conclude by documenting evidence that the MM also codes the emotional aspects conveyed by manual, facial, and body signals during communication, and that these signals act in concert with language to modulate the comprehension of another's message and behavior, in line with an “embodied” and integrated view of social interaction.

Introduction

In recent years, the hypothesis that language is “embodied” in sensory and motor experience has been widely discussed in the field of cognitive neuroscience.

In this review, we first discuss recent behavioral and neurophysiological studies confirming the essential role of sensorimotor brain areas in language processing, addressing the controversial issues and reviewing recent results that suggest an extended view of embodied theories.

We then discuss this hypothesis in light of evidence for the gestural origin of language, focusing on studies investigating the functional relation between manual gesture and speech and the neural circuits involved in their processing and production.

Finally, we report evidence on the functional role of manual and facial gestures as communicative signals that, in concert with language, express emotional messages in the extended context of social interaction.

All these points provide evidence in favor of an integrated body/verbal communication system mediated by the mirror mechanism (MM).

What Is Embodied About Communication? The Involvement of the Mirror Mechanism in Language Processing

It is well known that our thoughts are verbally expressed by symbols that have little or no physical relationship to the objects, actions, and feelings to which they refer. Understanding how linguistic symbols may have become associated with aspects of the real world represents one of the thorniest issues in the study of language and its evolution. In cognitive psychology, a classic debate has concerned how language is stored and retrieved in the human brain.

According to the classical “amodal approach,” concepts are expressed in a symbolic format (Fodor, 1998; Mahon and Caramazza, 2009). The core assumption is that the meanings of words form a kind of formal language, composed of arbitrary symbols that represent aspects of the world (Chomsky, 1980; Kintsch, 1998; Fodor, 2000); to understand a sentence, words are mapped onto symbols that represent their meaning. In other terms, there would be an arbitrary relationship between the word and its referent (Fodor, 1975, 2000; Pinker, 1994; Burgess and Lund, 1997; Kintsch, 1998). Neuropsychological studies provide interesting evidence for the amodal nature of concepts. In semantic dementia, for example, brain damage in the temporal and adjacent areas results in an impairment of conceptual processing (Patterson et al., 2007). A characteristic of this form of dementia is the degeneration of the anterior temporal lobe (ATL), which several imaging studies have shown to play a critical role in amodal conceptual representations (for a meta-analysis, see Visser et al., 2010).

In contrast, embodied approaches to language propose that conceptual knowledge is grounded in bodily experience and in the sensorimotor systems (Gallese and Lakoff, 2005; Barsalou, 2008; Casile, 2012) that are involved in forming and retrieving semantic knowledge (Kiefer and Pulvermüller, 2012). These theories are supported by the discovery of mirror neurons (MNs), identified in the ventral premotor area (F5) of the macaque (Gallese et al., 1996; Rizzolatti et al., 2014). MNs would lie at the basis of both action comprehension and language understanding, constituting the neural substrate from which more sophisticated forms of communication evolved (Rizzolatti and Arbib, 1998; Corballis, 2010). The MM is based on the process of motor resonance, which mediates action comprehension: when we observe someone performing an action, the visual input of the observed motor act reaches and activates the same fronto-parietal networks recruited during the execution of that action (Nelissen et al., 2011), permitting direct access to one's own motor representation. This mechanism was hypothesized to extend to language comprehension, namely when we listen to a word or a sentence related to an action (e.g., “grasping an apple”), allowing automatic access to action/word semantics (Glenberg and Kaschak, 2002; Pulvermüller, 2005; Fischer and Zwaan, 2008; Innocenti et al., 2014; Vukovic et al., 2017; Courson et al., 2018; Dalla Volta et al., 2018). This means that we comprehend words referring to concrete objects or actions by directly accessing their meaning through our sensorimotor experience (Barsalou, 2008).

Sensorimotor activation in response to language processing has been demonstrated by a large number of neurophysiological studies. Functional magnetic resonance imaging (fMRI) studies demonstrated that seeing action verbs activated similar motor and premotor areas as when participants actually moved the effector associated with these verbs (Buccino et al., 2001; Hauk et al., 2004). This “somatotopy” is one of the major arguments supporting the idea that concrete concepts are grounded in the action–perception systems of the brain (Pulvermüller, 2005; Barsalou, 2008). Transcranial magnetic stimulation (TMS) results confirmed this somatotopy in the human primary motor cortex (M1), demonstrating that stimulation of the arm or leg M1 regions facilitated the recognition of action verbs involving movement of the respective extremities (Pulvermüller, 2005; Innocenti et al., 2014).

However, one of the major criticisms of the embodied theory is the idea that the motor system plays an epiphenomenal role during language processing (Mahon and Caramazza, 2008). In this view, activations of the motor system are not necessary for language understanding but are the result of a cascade of spreading activations caused by the amodal semantic representation, or a consequence of explicit perceptual or motor imagery induced by the semantic tasks.

To address this point, further neurophysiological studies using time-resolved techniques such as high-density electroencephalography (EEG) or magnetoencephalography (MEG) indicated that the motor system is involved in an early time window corresponding to lexical-semantic access (Pulvermüller, 2005; Hauk et al., 2008; Dalla Volta et al., 2014; Mollo et al., 2016), supporting a causal relationship between motor cortex activation and action verb comprehension. Interestingly, recent evidence (Dalla Volta et al., 2018; García et al., 2019) has dissociated the contribution of the motor system during early semantic access from the activation of lateral temporal-occipital areas in deeper semantic processing (e.g., categorization tasks) and multimodal reactivation.
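
To make the logic of such time-resolved contrasts concrete, the following sketch simulates the style of analysis these EEG/MEG studies rely on: epochs time-locked to word onset are reduced to a mean amplitude in an early window and compared across word categories. Everything here is simulated; the 150–250 ms window, effect sizes, and noise levels are illustrative choices, not values from any cited study.

```python
import numpy as np
from scipy import stats

fs = 500                                      # sampling rate (Hz), assumed
times = np.arange(-0.1, 0.6, 1 / fs)          # epoch around word onset (s)
early = (times >= 0.15) & (times <= 0.25)     # putative lexical-semantic window

rng = np.random.default_rng(1)

def simulate_epochs(n_trials, effect):
    """Toy single-channel epochs: a transient component at ~200 ms plus noise."""
    component = effect * np.exp(-((times - 0.2) ** 2) / (2 * 0.03 ** 2))
    return component + rng.normal(0.0, 1.0, size=(n_trials, times.size))

action_verbs = simulate_epochs(40, effect=2.0)     # stronger motor-site response
abstract_words = simulate_epochs(40, effect=0.5)

# Mean amplitude per trial in the early window, then a between-condition test.
a = action_verbs[:, early].mean(axis=1)
b = abstract_words[:, early].mean(axis=1)
t, p = stats.ttest_ind(a, b)
print(f"early-window contrast: t = {t:.2f}, p = {p:.4g}")
```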

Another outstanding question is raised by the controversial data on the processing of non-action language (i.e., “abstract” concepts). According to the Dual Coding Theory (Paivio, 1991), concrete words are represented in both linguistic and sensorimotor-based systems, while abstract words would be represented only in the linguistic one. Neuroimaging studies support this idea, showing that the processing of abstract words is associated with higher activations in the left IFG and the superior temporal cortex (Binder et al., 2005, 2009; Wang et al., 2010), areas commonly involved in linguistic processing. The Context Availability Hypothesis instead argues that abstract concepts have greater contextual ambiguity than concrete concepts (Schwanenflugel et al., 1988). While concrete words have direct relations with the objects or actions they refer to, abstract words can present multiple meanings and need more time to be understood (Dalla Volta et al., 2014, 2018; Buccino et al., 2019). On this assumption, they can be disambiguated if inserted in a “concrete context” that provides elements to narrow their meanings (Glenberg et al., 2008; Boulenger et al., 2009; Scorolli et al., 2011, 2012; Sakreida et al., 2013). Research on action metaphors (e.g., “grasp an idea”), which involve both action and thinking, found an engagement of sensorimotor systems even when action language is figurative (Boulenger et al., 2009, 2012; Cuccio et al., 2014). Nevertheless, some studies observed motor activation only for literal, but not idiomatic, sentences (Aziz-Zadeh et al., 2006; Raposo et al., 2009).

In a recent TMS study, De Marco et al. (2018) tested the effect of context in modulating motor cortex excitability during the semantic processing of abstract words. The presentation of a congruent manual symbolic gesture as a prime stimulus increased hand M1 excitability in the earlier phase of semantic processing and speeded word comprehension. These results confirmed that semantic access to abstract concepts may be mediated by sensorimotor areas when the latter are grounded in a familiar motor context.
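
The logic of this kind of priming design can be sketched in a few lines: within participants, M1 excitability (motor-evoked potential, MEP, amplitude) and word-comprehension speed are compared between congruent and incongruent gesture primes. The Python snippet below simulates such a comparison with invented numbers; it does not reproduce the study's data or analysis pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 30  # simulated participants

# Invented effect sizes; a within-subject design, hence paired tests.
mep_congruent = rng.normal(1.3, 0.3, n)      # MEP amplitude (mV), congruent prime
mep_incongruent = rng.normal(1.0, 0.3, n)    # MEP amplitude (mV), incongruent prime
rt_congruent = rng.normal(620, 60, n)        # comprehension RT (ms), congruent
rt_incongruent = rng.normal(670, 60, n)      # comprehension RT (ms), incongruent

print("M1 excitability:", stats.ttest_rel(mep_congruent, mep_incongruent))
print("comprehension RT:", stats.ttest_rel(rt_congruent, rt_incongruent))
```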

Gestures: A Bridge Between Language and Action

One of the major contributions in support of the embodied cognition theory derives from the hypothesis of the motor origin of spoken language. Comparative neuroanatomical and neurophysiological studies indicate that area F5 in macaques is cytoarchitectonically comparable to Brodmann area 44 in the human brain (IFG), which is part of Broca's area (Petrides et al., 2005, 2012). This area would be active not only during human action observation but also in language understanding (Fadiga et al., 1995, 2005; Pulvermüller et al., 2003), transforming heard phonemes into the corresponding motor representations of the same sound (Fadiga et al., 2002; Gentilucci et al., 2006). In this way, similarly to what happens during action comprehension, the MM would directly link the sender and the receiver of a message (manual or vocal) in a communicative context. For this reason, it was hypothesized to be the ancestral system favoring the evolution of language (Rizzolatti and Arbib, 1998).

Gentilucci and Corballis (2006) presented numerous pieces of empirical evidence supporting the importance of the motor system in the origin of language. Specifically, the execution/observation of a grasp with the hand would activate a command to grasp with the mouth, and vice versa (Gentilucci et al., 2001, 2004, 2012; Gentilucci, 2003; De Stefani et al., 2013a). On the basis of these results, the authors proposed that language evolved from arm postures that were progressively integrated with mouth articulation postures by means of a double hand–mouth command system (Gentilucci and Corballis, 2006). At some point in evolutionary development, the simple vocalizations and gestures inherited from our primate ancestors gave origin to a sophisticated language system for interacting with other conspecifics (Rizzolatti and Arbib, 1998; Arbib, 2003, 2005; Gentilucci and Corballis, 2006; Armstrong and Wilcox, 2007; Fogassi and Ferrari, 2007; Corballis, 2010), in which manual postures became associated with sounds.

Nowadays, during a face-to-face conversation, spoken language and communicative motor acts operate together in a synchronized way. The majority of gestures are produced in association with speech, and it is in this combination that the message assumes its specific meaning. Nevertheless, a particular type of gesture, the symbolic gesture (e.g., OK or STOP), can be delivered in utter silence because it replaces the formalized, linguistic component of the expression present in speech ( Kendon, 1982 , 1988 , 2004 ). A process of conventionalization ( Burling, 1999 ) transforms the meaningless hand movements that accompany verbal communication (i.e., gesticulations; McNeill, 1992 ) into symbolic gestures, just as a string of letters may be transformed into a meaningful word. Symbolic gestures therefore represent the conjunction point between manual actions and spoken language ( Andric and Small, 2012 ; Andric et al., 2013 ). This has generated great interest in the study of the interaction between symbolic gestures and speech, with the aim of shedding light on the complex question of the role of the sensorimotor system in language comprehension.

A large body of research has shown that, during language production and comprehension, gesture and spoken language are tightly connected ( Gunter and Bach, 2004 ; Bernardis and Gentilucci, 2006 ; Gentilucci et al., 2006 ; Gentilucci and Dalla Volta, 2008 ; Campione et al., 2014 ; De Marco et al., 2015 , 2018 ), suggesting that the neural systems for language understanding and action production are closely interactive ( Andric et al., 2013 ).

In line with the embodied view of language, the theory of integrated communication systems ( McNeill, 1992 , 2000 ; Kita, 2000 ) is centered on the idea that the comprehension and production of gestures and spoken language are managed by a single control system. Thus, gestures and spoken language are both represented in the motor domain and necessarily interact with each other during their processing and production.

By contrast, the theory of independent communication systems ( Krauss and Hadar, 1999 ; Barrett et al., 2005 ) claims that gestures and speech can work separately and are not necessarily integrated with each other. Communication with gestures is described as an auxiliary system, evolved in parallel with language, that can be used when the primary system (language) is difficult to use or not intact. In this view, gesture–speech interplay is regarded as a semantic integration of amodal representations, taking place only after the verbal and gestural messages have been processed separately. This hypothesis is primarily supported by neuropsychological cases showing that disorders of skilled, learned, purposive movements (limb apraxia) and language disorders (aphasia) are anatomically and functionally dissociable ( Kertesz et al., 1984 ; Papagno et al., 1993 ; Heilman and Rothi, 2003 ). However, limb apraxia often co-occurs with Broca’s aphasia ( Albert et al., 2013 ), and difficulty in gesture–speech semantic integration has been reported in aphasic patients ( Cocks et al., 2009 , 2018 ). Alongside clinical data, disrupting activity in both the left IFG and the middle temporal gyrus (MTG) has been found to impair gesture–speech integration ( Zhao et al., 2018 ).

Evidence in favor of the integrated-system theory comes from a series of behavioral and neurophysiological studies that have investigated the functional relationship between gestures and spoken language. The first evidence of the reciprocal influence of gestures and words during their production came from Bernardis and Gentilucci (2006) , who showed that the vocal spectra measured during the pronunciation of a word (e.g., “hello”) were modified by the simultaneous production of the gesture corresponding in meaning (and, vice versa, the gesture kinematics were inhibited). This interaction was found to depend on the semantic relationship conveyed by the two stimuli ( Barbieri et al., 2009 ), and it was replicated even when gestures and words were simply observed or presented in succession ( Vainiger et al., 2014 ; De Marco et al., 2015 ).

Neurophysiological studies have produced conflicting evidence about the core brain areas involved in gesture–word integration, implicating different neural substrates including M1 ( De Marco et al., 2015 , 2018 ), IFG, MTG, and the superior temporal gyrus/sulcus (STG/S) ( Willems and Hagoort, 2007 ; Straube et al., 2012 ; Dick et al., 2014 ; Özyürek, 2014 ; Fabbri-Destro et al., 2015 ). However, a virtual lesion of the IFG was shown to disrupt the gesture–speech integration effect ( Gentilucci et al., 2006 ), in accordance with the idea of the human Broca’s area (and thus the mirror circuit) as the core neural substrate of action, gesture, and language processing and their interplay ( Arbib, 2005 ). Partially in contrast, an investigation of the temporal dynamics of integration by means of combined EEG/fMRI techniques confirmed the activation of a left fronto-posterior-temporal network but revealed a primary involvement of temporal areas ( He et al., 2018 ).

Finally, further results in favor of a motor origin of language come from genetic research: the FOXP2 gene has been implicated both in verbal language production and in the coordination of upper-limb movements ( Teramitsu et al., 2004 ), raising the question of a possible molecular substrate linking speech with gesture (see Vicario, 2013 ).

In conclusion, a substantial body of results demonstrates a reciprocal influence between gesture and speech during their comprehension and production, with overlapping activation of the MM neural systems (IFG) involved in action, gesture, and language processing and their interplay (see Table 1 ). Future studies should consider integrating neuroscience research with promising fields investigating the issue at the molecular level.

Table 1. Summary of main concepts, neural evidence, and future challenges for the theories explaining language semantic processing and evolution.

Motor Signs in Emotional Communication

The majority of studies that have investigated the neural mechanisms of hand-gesture processing focused on the overlapping activations of words and gestures during their semantic comprehension and integration. However, gestural stimuli can convey more than semantic information: they can also express an emotional message. A first example came from the study of Shaver et al. (1987) , which tried to identify behavioral prototypes related to emotions (e.g., fist clenching is part of the anger prototype). More recently, Givens (2008) showed that uplifted-palm postures signal a vulnerable or non-aggressive stance toward a conspecific.

Beyond hand-gesture investigations, however, emerging research on the role of the motor system in emotion perception has addressed the mechanisms underlying the perception of body postures and facial gestures ( De Gelder, 2006 ; Niedenthal, 2007 ; Halberstadt et al., 2009 ; Calbi et al., 2017 ). Of note, specific connections with the limbic circuit have been found for mouth MNs ( Ferrari et al., 2017 ), indicating the existence of a distinct pathway linking mouth/face motor control to the encoding of communication and emotions. This neural evidence favors a role for the MM in the evolution and processing of emotional communication through mouth/facial postures. Just as actions, gestures, and language become messages that are understood by an observer without any cognitive mediation, the observation of a facial expression (such as disgust) would be immediately understood because it evokes the same representation in the insula of the observing individual ( Wicker et al., 2003 ).

We propose that the MM guides everyday interactions by recognizing emotional states in others, decoding bodily and non-verbal signals together with language, and influencing and integrating the communicative content within the complexity of a social interaction.

Indeed, exposure to congruent facial expressions has been found to affect the recognition of hand gestures ( Vicario and Newman, 2013 ), just as the observation of a facial gesture interferes with the production of a mouth posture involving the same muscles ( Tramacere et al., 2018 ).

Moreover, emotional speech (prosody), facial expressions, and hand postures have been found to directly influence motor behavior during social interactions ( Innocenti et al., 2012 ; De Stefani et al., 2013b , 2016 ; Di Cesare et al., 2017 ).

Conclusion and Future Directions

Numerous behavioral and neurophysiological findings support a crucial role for the MM in the origin of language, as well as in decoding the semantic and emotional aspects of communication.

However, some aspects require further investigation, and conflicting results have been reported concerning the neural systems involved in semantic processing (especially for abstract language).

A further limitation concerns experimental protocols that have studied language in isolation, without considering the complexity of social communication. In other words, language should always be considered in relation to the background of a person’s mood, emotions, actions, and the events from which the things we say derive their meanings. Future studies should adopt a more ecological approach, implementing research protocols that study language in association with congruent or incongruent non-verbal signals.

This will shed further light on the differential roles that brain areas play, and on their domain specificity, in understanding language and non-verbal signals as multiple channels of communication.

Furthermore, future research should consider integrating behavioral and combined neurophysiological techniques, extending sampling from typical to psychiatric populations.

Indeed, new results will also have important implications for understanding mental illnesses characterized by communication disorders and MM dysfunction, such as Autism Spectrum Disorder ( Oberman et al., 2008 ; Gizzonio et al., 2015 ), schizophrenia ( Sestito et al., 2013 ), and mood disorders ( Yuan and Hoff, 2008 ).

Author Contributions

All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

We would like to express our deep gratitude to our mentor, Prof. Maurizio Gentilucci, for his scientific guidance.

Funding. DD was supported by the project “KHARE” funded by INAIL to IN-CNR. ED was supported by Fondazione Cariparma.

  • Albert M. L., Goodglass H., Helm N. A., Rubens A. B., Alexander M. P. (2013). Clinical Aspects of Dysphasia. Berlin: Springer Science & Business Media.
  • Andric M., Small S. L. (2012). Gesture’s neural language. Front. Psychol. 3:99. doi: 10.3389/fpsyg.2012.00099
  • Andric M., Solodkin A., Buccino G., Goldin-Meadow S., Rizzolatti G., Small S. L. (2013). Brain function overlaps when people observe emblems, speech, and grasping. Neuropsychologia 51, 1619–1629. doi: 10.1016/j.neuropsychologia.2013.03.022
  • Arbib M. A. (2003). The Handbook of Brain Theory and Neural Networks. Cambridge, MA: MIT Press.
  • Arbib M. A. (2005). From monkey-like action recognition to human language: an evolutionary framework for neurolinguistics. Behav. Brain Sci. 28, 105–124. doi: 10.1017/S0140525X05000038
  • Armstrong D. F., Wilcox S. (2007). The Gestural Origin of Language. Oxford: Oxford University Press.
  • Aziz-Zadeh L., Wilson S. M., Rizzolatti G., Iacoboni M. (2006). Congruent embodied representations for visually presented actions and linguistic phrases describing actions. Curr. Biol. 16, 1818–1823. doi: 10.1016/j.cub.2006.07.060
  • Barbieri F., Buonocore A., Dalla Volta R. D., Gentilucci M. (2009). How symbolic gestures and words interact with each other. Brain Lang. 110, 1–11. doi: 10.1016/j.bandl.2009.01.002
  • Barrett A. M., Foundas A. L., Heilman K. M. (2005). Speech and gesture are mediated by independent systems. Behav. Brain Sci. 28, 125–126. doi: 10.1017/s0140525x05220034
  • Barsalou L. W. (2008). Grounded cognition. Annu. Rev. Psychol. 59, 617–645. doi: 10.1146/annurev.psych.59.103006.093639
  • Bernardis P., Gentilucci M. (2006). Speech and gesture share the same communication system. Neuropsychologia 44, 178–190. doi: 10.1016/j.neuropsychologia.2005.05.007
  • Binder J. R., Desai R. H., Graves W. W., Conant L. L. (2009). Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cereb. Cortex 19, 2767–2796. doi: 10.1093/cercor/bhp055
  • Binder J. R., Westbury C. F., McKiernan K. A., Possing E. T., Medler D. A. (2005). Distinct brain systems for processing concrete and abstract concepts. J. Cogn. Neurosci. 17, 905–917. doi: 10.1162/0898929054021102
  • Boulenger V., Hauk O., Pulvermüller F. (2009). Grasping ideas with the motor system: semantic somatotopy in idiom comprehension. Cereb. Cortex 19, 1905–1914. doi: 10.1093/cercor/bhn217
  • Boulenger V., Shtyrov Y., Pulvermüller F. (2012). When do you grasp the idea? MEG evidence for instantaneous idiom understanding. Neuroimage 59, 3502–3513. doi: 10.1016/j.neuroimage.2011.11.011
  • Buccino G., Binkofski F., Fink G. R., Fadiga L., Fogassi L., Gallese V., et al. (2001). Action observation activates premotor and parietal areas in a somatotopic manner: an fMRI study. Eur. J. Neurosci. 13, 400–404. doi: 10.1111/j.1460-9568.2001.01385.x
  • Buccino G., Colagè I., Silipo F., D’Ambrosio P. (2019). The concreteness of abstract language: an ancient issue and a new perspective. Brain Struct. Funct. 224, 1385–1401. doi: 10.1007/s00429-019-01851-1857
  • Burgess C., Lund K. (1997). Modelling parsing constraints with high dimensional context space. Lang. Cogn. Process. 12, 177–210. doi: 10.1080/016909697386844
  • Burling R. (1999). “Motivation, conventionalization, and arbitrariness in the origin of language,” in The Origins of Language: What Nonhuman Primates Can Tell Us, ed. King B. J. (Santa Fe, NM: School for American Research Press).
  • Calbi M., Angelini M., Gallese V., Umiltà M. A. (2017). “Embodied body language”: an electrical neuroimaging study with emotional faces and bodies. Sci. Rep. 7:6875. doi: 10.1038/s41598-017-07262-0
  • Campione G. C., De Stefani E., Innocenti A., De Marco D., Gough P. M., Buccino G., et al. (2014). Does comprehension of symbolic gestures and corresponding-in-meaning words make use of motor simulation? Behav. Brain Res. 259, 297–301. doi: 10.1016/j.bbr.2013.11.025
  • Casile A. (2012). Mirror neurons (and beyond) in the macaque brain: an overview of 20 years of research. Neurosci. Lett. 540, 3–14. doi: 10.1016/j.neulet.2012.11.003
  • Chomsky N. (1980). Rules and representations. Behav. Brain Sci. 3, 1–15.
  • Cocks N., Byrne S., Pritchard M., Morgan G., Dipper L. (2018). Integration of speech and gesture in aphasia. Int. J. Lang. Commun. Dis. 53, 584–591. doi: 10.1111/1460-6984.12372
  • Cocks N., Sautin L., Kita S., Morgan G., Zlotowitz S. (2009). Gesture and speech integration: an exploratory study of a man with aphasia. Int. J. Lang. Commun. Dis. 44, 795–804. doi: 10.1080/13682820802256965
  • Corballis M. C. (2010). Mirror neurons and the evolution of language. Brain Lang. 112, 25–35. doi: 10.1016/j.bandl.2009.02.002
  • Courson M., Macoir J., Tremblay P. (2018). A facilitating role for the primary motor cortex in action sentence processing. Behav. Brain Res. 15, 244–249. doi: 10.1016/j.bbr.2017.09.019
  • Cuccio V., Ambrosecchia M., Ferri F., Carapezza M., Lo Piparo F., Fogassi L., et al. (2014). How the context matters. Literal and figurative meaning in the embodied language paradigm. PLoS One 9:e115381. doi: 10.1371/journal.pone.0115381
  • Dalla Volta R., Fabbri-Destro M., Gentilucci M., Avanzini P. (2014). Spatiotemporal dynamics during processing of abstract and concrete verbs: an ERP study. Neuropsychologia 61, 163–174. doi: 10.1016/j.neuropsychologia.2014.06.019
  • Dalla Volta R. D., Avanzini P., De Marco D., Gentilucci M., Fabbri-Destro M. (2018). From meaning to categorization: the hierarchical recruitment of brain circuits selective for action verbs. Cortex 100, 95–110. doi: 10.1016/j.cortex.2017.09.012
  • De Gelder B. (2006). Towards the neurobiology of emotional body language. Nat. Rev. Neurosci. 7:242. doi: 10.1038/nrn1872
  • De Marco D., De Stefani E., Bernini D., Gentilucci M. (2018). The effect of motor context on semantic processing: a TMS study. Neuropsychologia 114, 243–250. doi: 10.1016/j.neuropsychologia.2018.05.003
  • De Marco D., De Stefani E., Gentilucci M. (2015). Gesture and word analysis: the same or different processes? NeuroImage 117, 375–385. doi: 10.1016/j.neuroimage.2015.05.080
  • De Stefani E., De Marco D., Gentilucci M. (2016). The effects of meaning and emotional content of a sentence on the kinematics of a successive motor sequence mimiking the feeding of a conspecific. Front. Psychol. 7:672. doi: 10.3389/fpsyg.2016.00672
  • De Stefani E., Innocenti A., De Marco D., Gentilucci M. (2013a). Concatenation of observed grasp phases with observer’s distal movements: a behavioural and TMS study. PLoS One 8:e81197. doi: 10.1371/journal.pone.0081197
  • De Stefani E., Innocenti A., Secchi C., Papa V., Gentilucci M. (2013b). Type of gesture, valence, and gaze modulate the influence of gestures on observer’s behaviors. Front. Hum. Neurosci. 7:542. doi: 10.3389/fnhum.2013.00542
  • Di Cesare G., De Stefani E., Gentilucci M., De Marco D. (2017). Vitality forms expressed by others modulate our own motor response: a kinematic study. Front. Hum. Neurosci. 11:565. doi: 10.3389/fnhum.2017.00565
  • Dick A. S., Mok E. H., Beharelle A. R., Goldin-Meadow S., Small S. L. (2014). Frontal and temporal contributions to understanding the iconic co-speech gestures that accompany speech. Hum. Brain Mapp. 35, 900–917. doi: 10.1002/hbm.22222
  • Fabbri-Destro M., Avanzini P., De Stefani E., Innocenti A., Campi C., Gentilucci M. (2015). Interaction between words and symbolic gestures as revealed by N400. Brain Topogr. 28, 591–605. doi: 10.1007/s10548-014-0392-4
  • Fadiga L., Craighero L., Buccino G., Rizzolatti G. (2002). Speech listening specifically modulates the excitability of tongue muscles: a TMS study. Eur. J. Neurosci. 15, 399–402. doi: 10.1046/j.0953-816x.2001.01874.x
  • Fadiga L., Craighero L., Olivier E. (2005). Human motor cortex excitability during the perception of others’ action. Curr. Opin. Neurobiol. 15, 213–218. doi: 10.1016/j.conb.2005.03.013
  • Fadiga L., Fogassi L., Pavesi G., Rizzolatti G. (1995). Motor facilitation during action observation: a magnetic stimulation study. J. Neurophysiol. 73, 2608–2611. doi: 10.1152/jn.1995.73.6.2608
  • Ferrari P. F., Gerbella M., Coudé G., Rozzi S. (2017). Two different mirror neuron networks: the sensorimotor (hand) and limbic (face) pathways. Neuroscience 358, 300–315. doi: 10.1016/j.neuroscience.2017.06.052
  • Fischer M. H., Zwaan R. A. (2008). Embodied language: a review of the role of the motor system in language comprehension. Q. J. Exp. Psychol. 61, 825–850. doi: 10.1080/17470210701623605
  • Fodor A. D. (2000). The Mind Doesn’t Work that Way: The Scope and Limits of Computational Psychology. New York, NY: MIT Press.
  • Fodor J. A. (1975). The Language of Thought. Cambridge, MA: Harvard University Press.
  • Fodor J. A. (1998). Concepts: Where Cognitive Science Went Wrong. Oxford: Oxford University Press.
  • Fogassi L., Ferrari P. F. (2007). Mirror neurons and the evolution of embodied language. Curr. Dir. Psychol. Sci. 16, 136–141. doi: 10.1111/j.1467-8721.2007.00491.x
  • Gallese V., Fadiga L., Fogassi L., Rizzolatti G. (1996). Action recognition in the premotor cortex. Brain 119, 593–609. doi: 10.1093/brain/119.2.593
  • Gallese V., Lakoff G. (2005). The brain’s concepts: the role of the sensory-motor system in conceptual knowledge. Cogn. Neuropsychol. 22, 455–479. doi: 10.1080/02643290442000310
  • García A. M., Moguilner S., Torquati K., García-Marco E., Herrera E., Muñoz E., et al. (2019). How meaning unfolds in neural time: embodied reactivations can precede multimodal semantic effects during language processing. NeuroImage 197, 439–449. doi: 10.1016/j.neuroimage.2019.05.002
  • Gentilucci M. (2003). Grasp observation influences speech production. Eur. J. Neurosci. 17, 179–184. doi: 10.1046/j.1460-9568.2003.02438.x
  • Gentilucci M., Benuzzi F., Gangitano M., Grimaldi S. (2001). Grasp with hand and mouth: a kinematic study on healthy subjects. J. Neurophysiol. 86, 1685–1699. doi: 10.1152/jn.2001.86.4.1685
  • Gentilucci M., Bernardis P., Crisi G., Dalla Volta R. (2006). Repetitive transcranial magnetic stimulation of Broca’s area affects verbal responses to gesture observation. J. Cogn. Neurosci. 18, 1059–1074. doi: 10.1162/jocn.2006.18.7.1059
  • Gentilucci M., Campione G. C., De Stefani E., Innocenti A. (2012). Is the coupled control of hand and mouth postures precursor of reciprocal relations between gestures and words? Behav. Brain Res. 233, 130–140. doi: 10.1016/j.bbr.2012.04.036
  • Gentilucci M., Corballis M. C. (2006). From manual gesture to speech: a gradual transition. Neurosci. Biobehav. Rev. 30, 949–960. doi: 10.1016/j.neubiorev.2006.02.004
  • Gentilucci M., Dalla Volta R. (2008). Spoken language and arm gestures are controlled by the same motor control system. Q. J. Exp. Psychol. 61, 944–957. doi: 10.1080/17470210701625683
  • Gentilucci M., Stefanini S., Roy A. C., Santunione P. (2004). Action observation and speech production: study on children and adults. Neuropsychologia 42, 1554–1567. doi: 10.1016/j.neuropsychologia.2004.03.002
  • Givens D. B. (2008). The Nonverbal Dictionary of Gestures, Signs, and Body Language. Washington: Center for Nonverbal Studies.
  • Gizzonio V., Avanzini P., Campi C., Orivoli S., Piccolo B., Cantalupo G., et al. (2015). Failure in pantomime action execution correlates with the severity of social behavior deficits in children with autism: a praxis study. J. Autism Dev. Dis. 45, 3085–3097. doi: 10.1007/s10803-015-2461-2
  • Glenberg A. M., Kaschak M. P. (2002). Grounding language in action. Psychon. Bull. Rev. 9, 558–565. doi: 10.3758/BF03196313
  • Glenberg A. M., Sato M., Cattaneo L. (2008). Use-induced motor plasticity affects the processing of abstract and concrete language. Curr. Biol. 18, R290–R291. doi: 10.1016/j.cub.2008.02.036
  • Gunter T. C., Bach P. (2004). Communicating hands: ERPs elicited by meaningful symbolic hand postures. Neurosci. Lett. 372, 52–56. doi: 10.1016/j.neulet.2004.09.011
  • Halberstadt J., Winkielman P., Niedenthal P. M., Dalle N. (2009). Emotional conception: how embodied emotion concepts guide perception and facial action. Psychol. Sci. 20, 1254–1261. doi: 10.1111/j.1467-9280.2009.02432.x
  • Hauk O., Johnsrude I., Pulvermüller F. (2004). Somatotopic representation of action words in human motor and premotor cortex. Neuron 41, 301–307. doi: 10.1016/s0896-6273(03)00838-9
  • Hauk O., Shtyrov Y., Pulvermüller F. (2008). The time course of action and action-word comprehension in the human brain as revealed by neurophysiology. J. Physiol. Paris 102, 50–58. doi: 10.1016/j.jphysparis.2008.03.013
  • He Y., Steines M., Sommer J., Gebhardt H., Nagels A., Sammer G., et al. (2018). Spatial–temporal dynamics of gesture–speech integration: a simultaneous EEG-fMRI study. Brain Struct. Funct. 223, 3073–3089. doi: 10.1007/s00429-018-1674-1675
  • Heilman K. M., Rothi L. J. G. (2003). “Apraxia,” in Clinical Neuropsychology, eds Heilman K. M., Valenstein E. (New York, NY: Oxford University Press).
  • Innocenti A., De Stefani E., Bernardi N. F., Campione G. C., Gentilucci M. (2012). Gaze direction and request gesture in social interactions. PLoS One 7:e36390. doi: 10.1371/journal.pone.0036390
  • Innocenti A., De Stefani E., Sestito M., Gentilucci M. (2014). Understanding of action-related and abstract verbs in comparison: a behavioral and TMS study. Cogn. Process. 15, 85–92. doi: 10.1007/s10339-013-0583-z
  • Kendon A. (1982). “Discussion of papers on basic phenomena of interaction: plenary session, subsection 4, research committee on sociolinguistics,” in Proceedings of the Xth International Congress of Sociology, Mexico.
  • Kendon A. (1988). “How gestures can become like words,” in Crosscultural Perspectives in Nonverbal Communication, ed. Poyatos F. (Toronto: Hogrefe Publishers).
  • Kendon A. (2004). Gesture: Visible Action as Utterance. Cambridge: Cambridge University Press.
  • Kertesz A., Ferro J. M., Shewan C. M. (1984). Apraxia and aphasia: the functional-anatomical basis for their dissociation. Neurology 34, 40. doi: 10.1212/wnl.34.1.40
  • Kiefer M., Pulvermüller F. (2012). Conceptual representations in mind and brain: theoretical developments, current evidence and future directions. Cortex 48, 805–825. doi: 10.1016/j.cortex.2011.04.006
  • Kintsch W. (1998). Comprehension: A Paradigm for Cognition. Cambridge: Cambridge University Press.
  • Kita S. (2000). “Representational gestures help speaking,” in Language and Gesture, ed. McNeill D. (Cambridge: Cambridge University Press).
  • Krauss R. M., Hadar U. (1999). “The role of speech-related arm/hand gestures in word retrieval,” in Gesture, Speech, and Sign, eds Messing L. S., Campbell R. (Oxford: University of Oxford).
  • Mahon B. Z., Caramazza A. (2008). A critical look at the embodied cognition hypothesis and a new proposal for grounding conceptual content. J. Physiol. Paris 102, 59–70. doi: 10.1016/j.jphysparis.2008.03.004
  • Mahon B. Z., Caramazza A. (2009). Concepts and categories: a cognitive neuropsychological perspective. Annu. Rev. Psychol. 60, 27–51. doi: 10.1146/annurev.psych.60.110707.163532
  • McNeill D. (1992). Hand and Mind: What Gestures Reveal about Thought. Chicago, IL: University of Chicago Press.
  • McNeill D. (2000). Language and Gesture. Cambridge: Cambridge University Press.
  • Mollo G., Pulvermüller F., Hauk O. (2016). Movement priming of EEG/MEG brain responses for action-words characterizes the link between language and action. Cortex 74, 262–276. doi: 10.1016/j.cortex.2015.10.021
  • Nelissen K., Borra E., Gerbella M., Rozzi S., Luppino G., Vanduffel W., et al. (2011). Action observation circuits in the macaque monkey cortex. J. Neurosci. 31, 3743–3756. doi: 10.1523/JNEUROSCI.4803-10.2011
  • Niedenthal P. M. (2007). Embodying emotion. Science 316, 1002–1005. doi: 10.1126/science.1136930
  • Oberman L. M., Ramachandran V. S., Pineda J. A. (2008). Modulation of mu suppression in children with autism spectrum disorders in response to familiar or unfamiliar stimuli: the mirror neuron hypothesis. Neuropsychologia 46, 1558–1565. doi: 10.1016/j.neuropsychologia.2008.01.010
  • Özyürek A. (2014). Hearing and seeing meaning in speech and gesture: insights from brain and behaviour. Philos. Trans. R. Soc. B Biol. Sci. 369:20130296. doi: 10.1098/rstb.2013.0296
  • Paivio A. (1991). Dual coding theory: retrospect and current status. Can. J. Psychol. 45:255. doi: 10.1037/h0084295
  • Papagno C., Della Sala S., Basso A. (1993). Ideomotor apraxia without aphasia and aphasia without apraxia: the anatomical support for a double dissociation. J. Neurol. Neurosurg. Psychiatr. 56, 286–289. doi: 10.1136/jnnp.56.3.286
  • Patterson K., Nestor P. J., Rogers T. T. (2007). Where do you know what you know? The representation of semantic knowledge in the human brain. Nat. Rev. Neurosci. 8:976. doi: 10.1038/nrn2277
  • Petrides M., Cadoret G., Mackey S. (2005). Orofacial somatomotor responses in the macaque monkey homologue of Broca’s area. Nature 435:1235. doi: 10.1038/nature03628
  • Petrides M., Tomaiuolo F., Yeterian E. H., Pandya D. N. (2012). The prefrontal cortex: comparative architectonic organization in the human and the macaque monkey brains. Cortex 48, 46–57. doi: 10.1016/j.cortex.2011.07.002
  • Pinker S. (1994). The Language Instinct. New York, NY: HarperCollins.
  • Pulvermüller F. (2005). Brain mechanisms linking language and action. Nat. Rev. Neurosci. 6, 576–582. doi: 10.1038/nrn1706
  • Pulvermüller F., Shtyrov Y., Ilmoniemi R. (2003). Spatiotemporal dynamics of neural language processing: an MEG study using minimum-norm current estimates. NeuroImage 20, 1020–1025. doi: 10.1016/S1053-8119(03)00356-352
  • Raposo A., Moss H. E., Stamatakis E. A., Tyler L. K. (2009). Modulation of motor and premotor cortices by actions, action words and action sentences. Neuropsychologia 47, 388–396. doi: 10.1016/j.neuropsychologia.2008.09.017
  • Rizzolatti G., Arbib M. A. (1998). Language within our grasp. Trends Neurosci. 21, 188–194. doi: 10.1016/S0166-2236(98)01260-1260
  • Rizzolatti G., Cattaneo L., Fabbri-Destro M., Rozzi S. (2014). Cortical mechanisms underlying the organization of goal-directed actions and mirror neuron-based action understanding. Physiol. Rev. 94, 655–706. doi: 10.1152/physrev.00009.2013
  • Sakreida K., Scorolli C., Menz M. M., Heim S., Borghi A. M., Binkofski F. (2013). Are abstract action words embodied? An fMRI investigation at the interface between language and motor cognition. Front. Hum. Neurosci. 7:125. doi: 10.3389/fnhum.2013.00125
  • Schwanenflugel P. J., Harnishfeger K. K., Stowe R. W. (1988). Context availability and lexical decisions for abstract and concrete words. J. Mem. Lang. 27, 499–520. doi: 10.1016/0749-596X(88)90022-90028
  • Scorolli C., Binkofski F., Buccino G., Nicoletti R., Riggio L., Borghi A. M. (2011). Abstract and concrete sentences, embodiment, and languages. Front. Psychol. 2:227. doi: 10.3389/fpsyg.2011.00227
  • Scorolli C., Jacquet P. O., Binkofski F., Nicoletti R., Tessari A., Borghi A. M. (2012). Abstract and concrete phrases processing differentially modulates cortico-spinal excitability. Brain Res. 1488, 60–71. doi: 10.1016/j.brainres.2012.10.004
  • Sestito M., Umiltà M. A., De Paola G., Fortunati R., Raballo A., Leuci E., et al. (2013). Facial reactions in response to dynamic emotional stimuli in different modalities in patients suffering from schizophrenia: a behavioral and EMG study. Front. Hum. Neurosci. 7:368. doi: 10.3389/fnhum.2013.00368
  • Shaver P., Schwartz J., Kirson D., O’Connor C. (1987). Emotion knowledge: further exploration of a prototype approach. J. Pers. Soc. Psychol. 52:1061. doi: 10.1037//0022-3514.52.6.1061
  • Straube B., Green A., Weis S., Kircher T. (2012). A supramodal neural network for speech and gesture semantics: an fMRI study. PLoS One 7:e51207. doi: 10.1371/journal.pone.0051207
  • Teramitsu I., Kudo L. C., London S. E., Geschwind D. H., White S. A. (2004). Parallel FoxP1 and FoxP2 expression in songbird and human brain predicts functional interaction. J. Neurosci. 24, 3152–3163. doi: 10.1523/jneurosci.5589-03.2004
  • Tramacere A., Ferrari P. F., Gentilucci M., Giuffrida V., De Marco D. (2018). The emotional modulation of facial mimicry: a kinematic study. Front. Psychol. 8:2339. doi: 10.3389/fpsyg.2017.02339
  • Vainiger D., Labruna L., Ivry R. B., Lavidor M. (2014). Beyond words: evidence for automatic language–gesture integration of symbolic gestures but not dynamic landscapes. Psychol. Res. 78, 55–69. doi: 10.1007/s00426-012-0475-473
  • Vicario C. M. (2013). FOXP2 gene and language development: the molecular substrate of the gestural-origin theory of speech? Front. Behav. Neurosci. 7:99.
  • Vicario C. M., Newman A. (2013). Emotions affect the recognition of hand gestures. Front. Hum. Neurosci. 7:906. doi: 10.3389/fnhum.2013.00906
  • Visser M., Jefferies E., Lambon Ralph M. A. (2010). Semantic processing in the anterior temporal lobes: a meta-analysis of the functional neuroimaging literature. J. Cogn. Neurosci. 22, 1083–1094. doi: 10.1162/jocn.2009.21309
  • Vukovic N., Feurra M., Shpektor A., Myachykov A., Shtyrov Y. (2017). Primary motor cortex functionally contributes to language comprehension: an online rTMS study. Neuropsychologia 96, 222–229. doi: 10.1016/j.neuropsychologia.2017.01.025
  • Wang J., Conder J. A., Blitzer D. N., Shinkareva S. V. (2010). Neural representation of abstract and concrete concepts: a meta-analysis of neuroimaging studies. Hum. Brain Mapp. 31, 1459–1468. doi: 10.1002/hbm.20950
  • Wicker B., Keysers C., Plailly J., Royet J. P., Gallese V., Rizzolatti G. (2003). Both of us disgusted in my insula: the common neural basis of seeing and feeling disgust. Neuron 40, 655–664. doi: 10.1016/s0896-6273(03)00679-2
  • Willems R. M., Hagoort P. (2007). Neural evidence for the interplay between language, gesture, and action: a review. Brain Lang. 101, 278–289. doi: 10.1016/j.bandl.2007.03.004
  • Yuan T. F., Hoff R. (2008). Mirror neuron system based therapy for emotional disorders. Med. Hypotheses 71, 722–726. doi: 10.1016/j.mehy.2008.07.004
  • Zhao W., Riggs K., Schindler I., Holle H. (2018). Transcranial magnetic stimulation over left inferior frontal and posterior temporal cortex disrupts gesture-speech integration. J. Neurosci. 38, 1891–1900. doi: 10.1523/jneurosci.1748-17.2017

How Much Research Is Being Written by Large Language Models?

New studies show a marked spike in LLM usage in academia, especially in computer science. What does this mean for researchers and reviewers?


In March of this year, a tweet about an academic paper went viral for all the wrong reasons. The introduction section of the paper, published in Elsevier’s Surfaces and Interfaces, began with this line: “Certainly, here is a possible introduction for your topic.”

Look familiar? 

It should, if you are a user of ChatGPT and have applied its talents for the purpose of content generation. LLMs are increasingly being used to assist with writing tasks, but until now examples like this in academia were largely anecdotal and had not been quantified.

“While this is an egregious example,” says James Zou, associate professor of biomedical data science and, by courtesy, of computer science and of electrical engineering at Stanford, “in many cases, it’s less obvious, and that’s why we need to develop more granular and robust statistical methods to estimate the frequency and magnitude of LLM usage. At this particular moment, people want to know what content around us is written by AI. This is especially important in the context of research, for the papers we author and read and the reviews we get on our papers. That’s why we wanted to study how much of those have been written with the help of AI.”

In two papers looking at LLM use in scientific publishing, Zou and his team* found that 17.5% of computer science papers and 16.9% of peer review text had at least some content drafted by AI. The paper on LLM usage in peer reviews will be presented at the International Conference on Machine Learning.

Read Mapping the Increasing Use of LLMs in Scientific Papers and Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews
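These headline percentages are corpus-level estimates rather than per-paper verdicts: the framework described in the papers treats the observed word distribution of a corpus as a mixture of human-written and LLM-generated text and estimates the mixture weight. The sketch below is a minimal illustration of that idea, not the authors’ actual code; the vocabulary, counts, and word distributions are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def estimate_llm_fraction(token_counts, p_human, p_llm):
    """Fit alpha in mix(t) = (1 - alpha) * p_human(t) + alpha * p_llm(t)
    by maximum likelihood over observed token counts.
    token_counts, p_human, p_llm: aligned 1-D arrays over a vocabulary."""
    def neg_log_likelihood(alpha):
        mix = (1 - alpha) * p_human + alpha * p_llm
        return -np.sum(token_counts * np.log(mix + 1e-12))
    result = minimize_scalar(neg_log_likelihood, bounds=(0.0, 1.0),
                             method="bounded")
    return result.x

# Hypothetical toy example over a 3-word vocabulary:
counts = np.array([120, 30, 50])      # observed corpus counts
p_h = np.array([0.70, 0.05, 0.25])    # human word distribution
p_l = np.array([0.40, 0.35, 0.25])    # LLM word distribution
print(f"estimated LLM fraction: {estimate_llm_fraction(counts, p_h, p_l):.2f}")
```

Fitting at the corpus level is what allows a population-level fraction to be reported even when no single document can be confidently classified on its own.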

Here Zou discusses the findings and implications of this work, which was supported through a Stanford HAI Hoffman Yee Research Grant.

How did you determine whether AI wrote sections of a paper or a review?

We first saw that there are these specific words – like commendable, innovative, meticulous, pivotal, intricate, realm, and showcasing – whose frequency in reviews sharply spiked, coinciding with the release of ChatGPT. Additionally, we know that these words are much more likely to be used by LLMs than by humans. The reason we know this is that we actually did an experiment where we took many papers, used LLMs to write reviews of them, and compared those reviews to reviews written by human reviewers on the same papers. Then we quantified which words are more likely to be used by LLMs vs. humans, and those are exactly the words listed. The fact that they are more likely to be used by an LLM and that they have also seen a sharp spike coinciding with the release of LLMs is strong evidence.
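To make the word-frequency logic concrete, here is a minimal sketch of how marker words like those above could be surfaced by comparing usage rates across the two sets of reviews. It illustrates the general technique rather than the study’s actual pipeline, and the corpora named in the trailing comment are hypothetical.

```python
import math
from collections import Counter

def marker_words(llm_texts, human_texts, smoothing=1.0):
    """Rank words by log-odds of appearing in LLM-generated reviews
    versus human-written reviews of the same papers; add-one smoothing
    keeps words seen in only one corpus finite. Inputs are lists of
    lowercased documents."""
    llm_counts = Counter(w for doc in llm_texts for w in doc.split())
    human_counts = Counter(w for doc in human_texts for w in doc.split())
    llm_total = sum(llm_counts.values())
    human_total = sum(human_counts.values())

    scores = {}
    for word in set(llm_counts) | set(human_counts):
        p_llm = (llm_counts[word] + smoothing) / (llm_total + smoothing)
        p_human = (human_counts[word] + smoothing) / (human_total + smoothing)
        scores[word] = math.log(p_llm / p_human)
    # Highest scores = words disproportionately favored by LLMs
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical usage, assuming paired corpora as described above:
# top20 = marker_words(llm_reviews, human_reviews)[:20]
```

Under this scoring, words such as “commendable” or “intricate” would be expected to rank near the top, and it is their simultaneous post-ChatGPT spike in observed frequency that makes the evidence compelling.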

Charts showing significant shift in the frequency of certain adjectives in research journals.

Some journals permit the use of LLMs in academic writing, as long as it’s noted, while others, including Science and the ICML conference, prohibit it. How are the ethics perceived in academia?

This is an important and timely topic because the policies of various journals are changing very quickly. For example, Science said in the beginning that they would not allow authors to use language models in their submissions, but they later changed their policy and said that people could use language models, as long as authors explicitly note where the language model is being used. All the journals are struggling with how to define this and what the right way forward is.

You observed an increase in usage of LLMs in academic writing, particularly in computer science papers (up to 17.5%). Math and Nature family papers, meanwhile, used AI text about 6.3% of the time. What do you think accounts for the discrepancy between these disciplines?

Artificial intelligence and computer science disciplines have seen an explosion in the number of papers submitted to conferences like ICLR and NeurIPS. I think that has placed a heavy burden, in many ways, on both reviewers and authors. It is increasingly difficult to find qualified reviewers who have time to review all these papers, and some authors may feel more competitive pressure to keep up and keep writing more and faster.

You analyzed close to a million papers on arXiv, bioRxiv, and Nature from January 2020 to February 2024. Do any of these journals include humanities papers or anything in the social sciences?

We mostly wanted to focus on CS, engineering, and biomedical areas, as well as interdisciplinary areas like the Nature family journals, which also publish some social science papers. Availability mattered in this case: it’s relatively easy for us to get data from arXiv, bioRxiv, and Nature, and a lot of AI conferences also make reviews publicly available. That’s not the case for humanities journals.

Did any results surprise you?

A few months after ChatGPT’s launch, we started to see a rapid, linear increase in the usage pattern in academic writing. This tells us how quickly these LLM technologies diffuse into the community and become adopted by researchers. The most surprising finding is the magnitude and speed of the increase in language model usage. Nearly a fifth of papers and peer review text use LLM modification. We also found that peer reviews submitted closer to the deadline and those less likely to engage with author rebuttal were more likely to use LLMs. 

This suggests a couple of things. Perhaps some of these reviewers are not as engaged with reviewing these papers, and that’s why they are offloading some of the work to AI to help. This could be problematic if reviewers are not fully involved. As one of the pillars of the scientific process, it is still necessary to have human experts providing objective and rigorous evaluations. If this is being diluted, that’s not great for the scientific community.

What do your findings mean for the broader research community?

LLMs are transforming how we do research. It’s clear from our work that many papers we read are written with the help of LLMs. There needs to be more transparency, and people should state explicitly how LLMs are used and whether they are used substantially. I don’t think it’s always a bad thing for people to use LLMs. In many areas, this can be very useful. For someone who is not a native English speaker, having the model polish their writing can be helpful. There are constructive ways for people to use LLMs in the research process; for example, in the earlier stages of a draft. You could get useful feedback from an LLM in real time instead of waiting weeks or months for external feedback.

But I think it’s still very important for the human researchers to be accountable for everything that is submitted and presented. They should be able to say, “Yes, I will stand behind the statements that are written in this paper.”

*Collaborators include: Weixin Liang, Yaohui Zhang, Zhengxuan Wu, Haley Lepp, Wenlong Ji, Xuandong Zhao, Hancheng Cao, Sheng Liu, Siyu He, Zhi Huang, Diyi Yang, Christopher Potts, Christopher D. Manning, Zachary Izzo, Lingjiao Chen, Haotian Ye, and Daniel A. McFarland.


