Facial action unit combination: AU7 + AU20 + AU26.
Speech is the most common medium of human communication. As a basic audiovisual signal, speech can not only identify different speakers but also effectively distinguish different emotional states. International research on emotional speech has focused on the analysis of acoustic features of emotion, such as prosody, the sound source, formants (resonance peaks), and the spectrum (Albanie et al., 2018). In recent years, deep learning has been widely studied and applied in speech emotion computation. Dongdong Li proposed a bidirectional long short-term memory network with directed self-attention (BLSTM-DSA). Long Short-Term Memory (LSTM) networks can learn long-term dependencies from local features, and Bi-directional LSTM (Bi-LSTM) makes the structure more robust through its direction mechanism, whose bidirectional analysis better identifies hidden emotions in sentences. In addition, the autocorrelation of speech frames can be exploited to compensate for missing information, which motivates introducing a self-attention mechanism into Speech Emotion Recognition (SER). Evaluated on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) database and the Berlin Emotional Speech Database (EMO-DB), BLSTM-DSA achieves recognition rates of over 70% on the speech emotion recognition task (Li et al., 2021).
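The attention step over speech frames can be sketched as follows. This is a minimal NumPy illustration of generic scaled dot-product self-attention, not the authors' BLSTM-DSA implementation; the frame features and dimensions are made up.

```python
import numpy as np

def self_attention(frames: np.ndarray) -> np.ndarray:
    """Scaled dot-product self-attention over speech frames.

    frames: (T, d) matrix of per-frame features (e.g., MFCCs).
    Returns a (T, d) matrix in which each frame is re-expressed as a
    weighted sum of all frames, so a noisy or missing frame can borrow
    information from correlated frames elsewhere in the utterance.
    """
    d = frames.shape[1]
    scores = frames @ frames.T / np.sqrt(d)       # (T, T) pairwise similarity
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True) # softmax over frames
    return weights @ frames

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))        # 5 frames, 8 features each (illustrative)
y = self_attention(x)
print(y.shape)                     # (5, 8)
```

In a full SER model this operation would sit on top of the Bi-LSTM outputs rather than raw features.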
Human posture often carries emotional information during interaction. Researchers have combined human posture with artificial intelligence to quantitatively assess the external representation of a person's mental state in different situations through movement- and body-information capture devices. For example, intelligent seats applied to a vehicle's driver seat can dynamically monitor the driver's emotional state and give timely warnings. Researchers in Italy have also conducted automatic emotion analysis of office staff through posture analysis in order to design more comfortable office environments.
Electroencephalography (EEG) is a record obtained by amplifying and registering, through precise electronic instruments, the spontaneous electrical potentials of the brain at the scalp. It has been widely used in the field of emotion recognition. The DEAP dataset, used to study human emotional states (Luo et al., 2020), records EEG and peripheral physiological signals from 32 participants watching 40 one-minute music video clips. Participants rated each video on arousal, valence, like/dislike, dominance, and familiarity. Correlations between EEG signal frequencies and participants' ratings were investigated via the emotional labels, and decision fusion was performed on the classification results from different modalities. The experiments obtained recognition rates of up to 98% (84.2% on average) when identifying a single emotional state, while for two, three, and four emotions the average recognition rates reached 90.2, 84.2, and 80.9%, respectively. Table 3 shows the validated classification accuracy on the DEAP dataset for different recognition models (Khateeb et al., 2021).
Classification accuracy on the DEAP dataset for different recognition models.
Stimulus | Classifier | Emotions | Subjects | Accuracy (%)
Video | GELM | 4 | 32 | 69.67
Audio | MLP | 4 | 30 | 78.11
Video | Nearest neighbour | 4 | 32 | 73.62
Video | Domain-adaptation | 5 | 14 | 39.05
Video | SVM | Valence-dominance | 10 | 63.04
Video | K-NN | 2 | 30 | 69.50
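Correlating EEG frequency content with emotion ratings typically starts from band-power features. The following sketch (NumPy) extracts power in the classic theta/alpha/beta bands from a synthetic signal; the band edges and the signal are illustrative and this is not the DEAP preprocessing pipeline.

```python
import numpy as np

def band_power(signal: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Mean spectral power of `signal` in the band [lo, hi) Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    mask = (freqs >= lo) & (freqs < hi)
    return float(spectrum[mask].mean())

fs = 128.0                                  # DEAP's downsampled rate
t = np.arange(0, 4, 1 / fs)                 # 4 s of signal
eeg = np.sin(2 * np.pi * 10 * t)            # synthetic 10 Hz alpha rhythm

# Classic EEG bands; exact edges vary between studies.
features = {
    "theta": band_power(eeg, fs, 4, 8),
    "alpha": band_power(eeg, fs, 8, 13),
    "beta":  band_power(eeg, fs, 13, 30),
}
print(max(features, key=features.get))      # alpha dominates here
```

Per-band, per-channel powers like these are the feature vectors that classifiers such as those in Table 3 are trained on.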
Our research group has also carried out related research on multimodal affective computing and holds a patent for automatic diagnosis of depression based on speech and facial expression: by incorporating facial pose features, we proposed a new double-dictionary idea robust to pose. In 2016, we performed feature extraction and evaluation of depressed speech, and in the following year we proposed using changes in patients' facial expression as an additional indicator for determining whether they suffer from depression. Figures 2 and 3 show the data.
Speech emotion recognition rate.
Facial emotion recognition rate.
In 2018, a new automatic depression-assisted discrimination algorithm integrating speech and facial expression was proposed. First, signal enhancement was performed on the depressed speech; the fundamental frequency and the first three formants were extracted by the cepstral method, along with energy, short-time average amplitude, and Mel-Frequency Cepstral Coefficient (MFCC) features. Speech and facial expression recognition models were then established to assist in judging whether a person has depression. Finally, an Adaboost algorithm based on a back-propagation (BP) neural network was proposed and validated in a practical automatic depression-assisted detection system. As Table 4 shows, the recognition rate of the depression detection algorithm based on fused speech and facial emotion reached 81.14%. The development of artificial intelligence provides a more objective basis for the diagnosis of depression in mental health care, which has cutting-edge and application value (Zhao et al., 2019d).
Recognition rates of speech, facial expression, and their fusion.
Condition | Speech recognition (%) | Facial expression recognition (%) | Fused recognition (%)
Before speech signal enhancement | 62.4 | 85.5 | 75.75
After speech signal enhancement | 78.8 | 85.5 | 82.29
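The fusion idea above can be illustrated with a minimal decision-level sketch: each modality outputs class probabilities, and a weighted combination decides. The posteriors and the weight below are illustrative; the paper's actual fusion uses an Adaboost algorithm based on a BP neural network, not this simple weighting.

```python
import numpy as np

def fuse_decisions(p_speech: np.ndarray, p_face: np.ndarray,
                   w_speech: float = 0.5) -> int:
    """Weighted decision-level fusion of two classifiers' class
    probabilities; returns the index of the fused winning class."""
    fused = w_speech * p_speech + (1.0 - w_speech) * p_face
    return int(np.argmax(fused))

# Illustrative posteriors for classes [not depressed, depressed]:
p_speech = np.array([0.35, 0.65])   # speech model leans "depressed"
p_face = np.array([0.55, 0.45])     # facial model mildly disagrees

print(fuse_decisions(p_speech, p_face, w_speech=0.6))  # 1 -> "depressed"
```

Because the two modalities err on different samples, a fused decision can beat either single modality, which is the pattern Table 4 shows after speech enhancement.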
Affective computing combines computational science with physiology, psychology, cognitive science, and other disciplines. Based on humans' common cognition and knowledge structures for different emotional expressions, it studies emotions in human-human and human–computer interaction and guides the design of artificial intelligence with emotion recognition and feedback functions, so that machines can understand human emotional intentions, respond appropriately, and achieve human–computer emotional interaction.
Extensive research on musical emotion suggests that music can trigger emotional activity in listeners. Scientists believe that when a person is in a beautiful and pleasant musical environment, the body secretes active substances that are beneficial to health and help eliminate the psychological factors that cause tension, anxiety, depression, and other adverse states (Rahman et al., 2021). People's preferences for different kinds of music are not arbitrary: psychological-cognitive studies and data tests indicate that a precise α value of the music signal can measure how pleasing it is to the ear. The closer the signal's α is to 1, the better it sounds, and α can also be obtained by artificial intelligence (Banerjee et al., 2016). This shows that people's psychological response to music can be judged by machines, and further research can build on this regularity to generate pleasant-sounding music in line with public aesthetics and realize interaction between emotions and machines.
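One plausible reading of such an α is the scaling exponent of a 1/f^α power spectrum, which for music tends to lie near 1. The estimation method below (a least-squares line fit in log-log spectral coordinates, NumPy) is an assumption for illustration, not necessarily the procedure of Banerjee et al.; the synthetic signal is constructed to have α ≈ 1.

```python
import numpy as np

def spectral_alpha(signal: np.ndarray, fs: float) -> float:
    """Estimate alpha in P(f) ~ 1/f^alpha by fitting a line to the
    power spectrum in log-log coordinates."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    keep = (freqs > 0) & (freqs < fs / 2)   # drop DC and Nyquist bins
    slope, _ = np.polyfit(np.log(freqs[keep]), np.log(spectrum[keep]), 1)
    return -slope                            # P(f) ~ f^-alpha => alpha = -slope

# Synthesize a pink-like (1/f) signal and recover its exponent.
rng = np.random.default_rng(1)
n, fs = 4096, 1000.0
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
amp = np.where(freqs > 0, freqs, 1.0) ** -0.5   # |X(f)| ~ f^-1/2 => P ~ 1/f
phases = np.exp(2j * np.pi * rng.random(freqs.size))
signal = np.fft.irfft(amp * phases, n)
print(round(spectral_alpha(signal, fs), 1))
```

An analysis like this, run over a candidate piece, would give a machine a single number to compare against the "pleasant" target of α ≈ 1.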
As shown in Figure 4, a team of researchers from the University of Reading and the University of Plymouth in the UK developed and evaluated an affective brain-computer music interface (aBCMI) for detecting a user's current emotional state and attempting to modulate it by playing music generated by a music composition system according to specific emotional goals.
The proposed affective brain-computer music interface (aBCMI). The system consists of five key elements: (A) the user of the system; (B) the physiological signal acquisition module (including electroencephalogram (EEG), electrocardiogram (ECG), and respiration rate); (C) an emotional state detection system that identifies the emotional state the user is currently experiencing; (D) a case-based reasoning system that determines how to move the user from the current emotional state to a new target emotional state; (E) a music generator that plays music for the user. The case-based reasoning system identifies the most appropriate emotional trajectory and moves the user to the target emotional state.
The affective state detection method achieved statistically significant online single-trial classification accuracy in classifying user valence for seven of eight participants and user arousal for three of eight participants. The mean accuracy of affective state detection was 53.96% (valence) and 53.80% (arousal) (Daly et al., 2016). The experimental data also demonstrate that the aBCMI system is able to detect the emotional states of most participants and generate music based on them to achieve "happy" and "calm" mental states. By visualizing abstract mental states, extracting features from changes in emotional state, and quantifying different emotions in different musical environments, the aBCMI system can effectively characterize current emotional states and provide feedback to regulate them, realizing the combination of psychology and artificial intelligence.
Musical emotion regulation first records physiological indicators from the user with a signal acquisition component, capturing the cognitive and physiological processes associated with the current affective state. Features most likely to correspond to changes in the user's affective state are extracted from these physiological signals. A case-based reasoning system then determines the best way to move the user toward the target emotional state.
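The case-based reasoning step can be sketched as nearest-case retrieval: given the user's current and target states, look up the stored case whose state pair is closest and reuse its intervention. The case base, the valence-arousal encoding, and the music labels below are all hypothetical, chosen only to illustrate the retrieval mechanism.

```python
import math

# Hypothetical case base: each case maps a (from, to) pair of
# (valence, arousal) states to a musical intervention that worked before.
CASES = [
    {"from": (-0.6, 0.7),  "to": (0.5, -0.3), "music": "slow major-key piece"},
    {"from": (-0.4, -0.5), "to": (0.6, 0.4),  "music": "upbeat rhythmic piece"},
    {"from": (0.2, 0.8),   "to": (0.3, -0.2), "music": "calm ambient piece"},
]

def retrieve(current, target):
    """Return the stored intervention whose (from, to) state pair is
    closest to the query, by summed Euclidean distance in
    valence-arousal space."""
    def dist(case):
        return math.dist(case["from"], current) + math.dist(case["to"], target)
    return min(CASES, key=dist)["music"]

# A tense user (negative valence, high arousal) who wants to feel calm:
print(retrieve(current=(-0.5, 0.6), target=(0.4, -0.4)))
```

A real aBCMI would also adapt the case base over time, adding cases when an intervention succeeds in shifting the measured state.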
Dapeng Li and Xiaoguang Liu have also combined incremental music-teaching methods with assisted therapy. Combining contextual teaching with artificial-intelligence attention theory makes the assisted-treatment system more targeted, and the design of the treatment content takes the patient's actual situation more fully into account. When designing the music-teaching treatment context, the physician fully considers the patient's circumstances, mobilizing the patient's interest in music learning so as to fully activate brain neurons and access pathological information around the lesion, thereby promoting the body's own immunity and subsequent treatment (Li and Liu, 2022).
The evocation of musical emotions rests on functional connections between sensory, emotional, and cognitive areas of the brain, including the subcortical reward networks common to humans and other animals, such as the nucleus accumbens, the amygdala, and the dopaminergic system, as well as the evolutionarily newer cerebral cortex with its complex cognitive functions. Musical emotions modulate the activity of almost all limbic and paralimbic structures of the brain. Music can induce different emotions, and musical emotions can in turn guide the development of artificial intelligence. Further research is expected in fields such as music generation, education, and medical treatment.
Through systematic analysis and application examples, this manuscript argues that artificial intelligence combined with cognitive psychology is the development direction of the field: to give computers the ability to simulate humans' higher cognition, to learn and think, to recognize emotions and understand human feelings, and finally to achieve dialog and empathy with human beings and with other artificial intelligences. Artificial intelligence with human psychological cognition can not only simulate the rational thinking of the "brain" but also reproduce the perceptual thinking of the "heart," realizing emotional interaction between people and machines, and between machines and machines, similar to human communication.
Nowadays, artificial intelligence theory based on cognitive psychology still has imperfections: owing to differences in race, region, and growth environment, the evaluation criteria of individual subjects are not completely consistent, and random-sampling differences are even greater. Moreover, mental activities are generally ambiguous and chaotic.
The future interdisciplinary combination of AI and psychology will focus on areas such as big-data medicine, human–computer interaction, brain-computer interfaces, and general artificial intelligence. Through the combination of cognitive science in psychology with AI, breakthroughs in many areas will be achieved based on multimodal data and the extraction of high-dimensional features. The two fields complement each other and develop together.
This manuscript provides a research direction for the development of artificial intelligence: to endow machines with human emotions and to realize human–computer interaction. The topic sits at the cutting edge of science and is not only of great theoretical significance but also has good development potential and application prospects. It is hoped that this work can provide a research basis for subsequent researchers.
JZ formulated the research manuscript idea, provided substantial edits to the manuscript and final draft, and aided in the interpretation of the manuscript. MW wrote the main body of the manuscript, participated in revisions, and submitted the final manuscript. LZ contributed to the formulation of the research manuscript idea, provided substantial edits to the manuscript and the final draft, and aided in the interpretation of the manuscript. XW and JJ participated in the conception of the idea and revised the manuscript. All authors contributed to the article and approved the submitted version.
This work was supported by the National Natural Science Foundation of China (No. 12071369) and the Key Research and Development Program of Shaanxi (No. 2019ZDLSF02-09-02).
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.