Week_09: Draft of Literature Review

The Application of Facial Expression and Micro-expression in Virtual Character Emotional Expressions

Draft of Literature Review

Introduction:

Emotion can be an important part of performance animation because it builds the relationship between characters and the audience. Traditionally, performance animation is based on references and regular patterns, while the behavioral and psychological exploration of the relationship between emotions and facial expressions is ignored. Such methods may weaken the expression of emotional detail. Drawing on subjects such as psychology and evolutionary biology, scientists believe that facial expressions and micro-expressions have a definite connection to human emotions, and that this connection can be decoded in a scientific way. Their research provides a basis for realistic emotional expression in virtual characters. This paper draws on recent research on facial expressions, micro-expressions and animation techniques, analyzes real cases in film, animation and practice, and attempts to find an artistic-scientific blended approach to building character emotions in performance animation.

Literature review:

Keywords: Facial expressions, Emotions, Animation.

Traditional performance animation:

According to Norman McLaren, “Animation is not the art of drawings that move but the art of movements that are drawn”. 2D and 3D animation can affect people and their emotions significantly. Traditionally, animators build performance animation from references or from their experience of observing human behavior. Jason Osipa shares a large number of computer facial animation techniques in his book Stop Staring. The book covers a wide range of facial expression techniques that animators use in production, including techniques for expressing emotion (Osipa, 2010). However, these principles are mostly based on observation and largely ignore research at the scientific level. Moreover, in certain emotional situations, such as hiding true emotions or deception, it is difficult to find patterns in people’s emotional changes, or even to represent them, through traditional methods. Scientific and psychological research, however, can provide animators with material on these specific emotional expressions. Therefore, human psychology and other scientific research can guide animators in developing convincing animation that audiences can connect with.

The development of scientific analysis of facial expressions and micro-expressions:

Research on the relationship between emotions, facial expressions and micro-expressions has spanned decades. Several scholars’ contributions are considered the foundation of this area, and some of them have already been applied in the film and animation industries.

Facial expression:

Research on facial expressions was initiated by Charles Darwin and later refined by other scholars. Neuropsychological studies point out the asymmetry of facial expressions, meaning that the two sides of the human face do not move symmetrically when emotional expressions occur. Additionally, scientists have found that emotions can be recognized more easily on the left side of the human face: socially appropriate signals are clearly visible on the right side of the face, while personalized signals are visible on the left (Mandal and Awasthi, 2015, p. 274). Other scientists, notably Ekman and Friesen, developed the universality thesis of facial expressions, which refers to the accurate recognition of facial expressions across cultures at better-than-chance levels (Ekman et al., 1987). Ekman also proposed the idea of six basic emotional expressions, which has been widely accepted by psychologists: happiness, sadness, anger, fear, surprise and disgust (Russell and Fernández-Dols, 2002, p. 11). However, this idea has also been criticized in some cross-cultural studies of emotional facial expressions (Russell, 1994).

Some scientists then developed automated systems for measuring facial activity, for example electromyography and electroencephalography (Mandal and Awasthi, 2015, p. 9). These systems fit the requirements of experimental data collection and scientific research, but they offer little guidance to animators. With the development of anatomically based coding systems, however, animators began to have more resources for analyzing facial expressions. Hjortsjö’s Mimic Language can be considered one of the earliest attempts to systematize facial muscular activity. His concept of mimicry also covers the additional expressive movements of gestures and postures, which are characteristic manifestations of emotional states (Hjortsjö, 1970). Hjortsjö also described the facial expressions related to twenty-four emotions and divided these expressions into eight categories (Hjortsjö, 1970).

Another important anatomically based coding system is the Facial Action Coding System (FACS), developed by Paul Ekman and Wallace Friesen in 1978. It breaks facial actions down into small components called action units (AUs), each of which can be considered a basic element of facial expression. By combining different AUs, people can produce many kinds of facial expressions based on muscle movements. FACS was originally designed for recording facial motion (Ekman, Friesen and Hager, 2002), but it has been widely applied in the film industry and computer animation for decades (Parke and Waters, 2020, p. 33).
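To make this concrete for animation work, here is a minimal Python sketch of how FACS action units might be mapped onto a character rig. The AU-to-emotion combinations follow commonly cited FACS pairings (exact sets vary between sources), and the blendshape target names are hypothetical placeholders rather than any particular rig's controls.

```python
# A minimal sketch: combining FACS action units (AUs) into basic expressions
# and mapping them to hypothetical blendshape targets on a character rig.
BASIC_EXPRESSIONS = {
    "happiness": ["AU6", "AU12"],                # cheek raiser + lip corner puller
    "sadness":   ["AU1", "AU4", "AU15"],         # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  ["AU1", "AU2", "AU5", "AU26"],  # brows up + upper lid raiser + jaw drop
    "anger":     ["AU4", "AU5", "AU7", "AU23"],  # brow lowerer + lid actions + lip tightener
}

# Hypothetical mapping from AUs to blendshape targets; the names are placeholders.
AU_TO_BLENDSHAPE = {
    "AU1": "browInnerUp", "AU2": "browOuterUp", "AU4": "browDown",
    "AU5": "eyeWide", "AU6": "cheekRaise", "AU7": "lidTighten",
    "AU12": "smile", "AU15": "frown", "AU23": "lipTighten", "AU26": "jawDrop",
}

def expression_targets(emotion):
    """Return the blendshape targets to activate for one of the basic emotions."""
    return [AU_TO_BLENDSHAPE[au] for au in BASIC_EXPRESSIONS[emotion]]

print(expression_targets("happiness"))  # ['cheekRaise', 'smile']
```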

Micro-expressions:

Micro-expressions are considered a special kind of facial expression of emotion, and their definition is still not settled. Mark G. Frank and Elena Svetieva tend to define a micro-expression as any expression of emotion shown for 0.5 s or less, because previous research suggested that most spontaneous expressions of emotion last between 0.5 and 4 (or 5) seconds (Mandal and Awasthi, 2015, p. 229). A distinctive feature of micro-expressions is that they reveal true emotional states in the form of a facial expression for a very short period of time (0.5 s or less), before being quickly disguised or suppressed by another expression.

The relationship between micro-expressions and deception has been analyzed by several scholars. Some of this research focuses on the difference between real and fake smiles. Scientists found that the main difference between a real and a fake smile lies in the muscle movements around the eyes (Duchenne, 1990). Although the corners of the mouth are pulled up in both kinds of smile, only a real smile triggers the movement of the orbicularis oculi muscle around the eyes (Duchenne, 1990). Additionally, according to DePaulo’s research, the facial pleasantness of liars is much lower than normal. Liars show more chin raises and more lip pressing, and look more nervous (DePaulo et al., 2003). However, other facial behaviors such as smiling and eyebrow lowering or raising have not shown consistently significant effect sizes (DePaulo et al., 2003).
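As a rough illustration of how these findings could be operationalized, the sketch below combines the 0.5-second duration criterion from the previous section with Duchenne’s observation about the orbicularis oculi (AU6 in FACS terms). The per-event data format is hypothetical, and real micro-expression detection is considerably harder than this.

```python
# A minimal sketch, assuming hypothetical per-event annotations: each detected
# expression event has a duration in seconds and a set of active FACS AUs.

MICRO_EXPRESSION_MAX_DURATION = 0.5  # Frank and Svetieva's threshold, in seconds

def is_micro_expression(duration_s):
    """Flag expression events at or below the 0.5 s threshold."""
    return duration_s <= MICRO_EXPRESSION_MAX_DURATION

def is_duchenne_smile(active_aus):
    """A 'real' smile engages the cheek raiser (AU6) as well as the lip corner puller (AU12)."""
    return "AU6" in active_aus and "AU12" in active_aus

# Example: a brief smile that never reaches the eyes.
event = {"duration": 0.3, "aus": {"AU12"}}
print(is_micro_expression(event["duration"]))  # True: shorter than 0.5 s
print(is_duchenne_smile(event["aus"]))         # False: mouth only, likely posed
```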

References:

Hjortsjö, C.-H. (1970). Man’s face and mimic language. Studentlitteratur.

DePaulo, B.M., Lindsay, J.J., Malone, B.E., Muhlenbruck, L., Charlton, K. and Cooper, H. (2003). “Cues to deception”, Psychological Bulletin, 129(1), pp. 74–118.

Duchenne, G. B. (1990). The Mechanism of Human Facial Expression. Edited by R. A. Cuthbertson. Cambridge: Cambridge University Press (Studies in Emotion and Social Interaction). doi: 10.1017/CBO9780511752841.

Ekman, P., Friesen, W.V. and Hager, J.C. (2002). Facial action coding system. Salt Lake City: Research Nexus.

Ekman, P., Friesen, W. V., O’Sullivan, M., Chan, A., Diacoyanni-Tarlatzis, I., Heider, K., et al. (1987). “Universals and cultural differences in the judgments of facial expressions of emotion”, Journal of Personality and Social Psychology, 53(4), pp. 712–717.

Mandal, M.K. and Awasthi, A. (2015). Understanding Facial Expressions in Communication: Cross-cultural and Multidisciplinary Perspectives. New Delhi: Springer India.

Osipa, J. (2010). Stop Staring: Facial Modeling and Animation Done Right. Indianapolis, IN: Sybex.

Parke, F.I. and Waters, K. (2020). Computer facial animation. Boca Raton: CRC Press.

Russell, J.A. (1994). “Is there universal recognition of emotion from facial expression? A review of the cross-cultural studies”, Psychological Bulletin, 115(1), pp. 102–141.

Russell, J.A. and Fernández-Dols, J.M. (2002). The psychology of facial expression. Cambridge: Cambridge University Press.

Collaborative Unit_Submission post

SHUT DOWN - The Animation Film:

Personal Works:

The video below summarizes my work in this team. I hope you like it.

My role in the team:

Story: Initial story design; the final story was developed by all the team members.

Rigging: Character rig, robot rig, newspaper rig and teapot rig.

Texturing and shading: UV unwrapping for the characters and the scene; textures for the whole scene (living room, kitchen, floor, wall), part of which was later replaced by Wanxuan Liu’s work (living room); toon shader adjustment.

Lighting: Scene lighting (living room, kitchen).

VFX: Cloth simulation and water simulation.

Rendering: Rendering for part of the characters and scene.

Sound: A small part of the sound in the video (cello and electronic); most of the sound was made by Wanxuan Liu.

Hyperlinks:

The process:

Week_01: Intro to the unit and projects

Week_02: Collaboration Unit Project Brief

Week_02: Texturing test & Model Rigging

Week_03: Character rigging

Week_04: Texturing

Week_05: Refine Texturing & Improve the scene

Week_06: Lighting & Sim & Refine Texturing

Week_07: Simulation & Rendering

Week_08: Rendering & Simulation

Week_09: Rendering & Project Summary

Other links:

Week_02: Summary

Week_05: Seminar and 1-1 support

Week_09: Rendering & Project Summary

This week we rendered the last few sequences of our animation and finished the editing and compositing. After compositing we discussed the sound and music. I added some cello and electronic sounds for our video, while Wanxuan finished most of it.

The final result of our animation will be posted in the next post. Here I will summarize our project and review some of our work.

Project summary:

Team members and roles:

Zhengzhong Liang: Script writing; Character Modeling (old man & robot); Texturing and Shading (robot); Lighting; Rendering; Compositing.

Ziyin Wang: Script writing; Layout; Animation (all shots)

Wanxuan Liu: Script writing; Storyboard; Character Design; Texturing and Shading (old man, part of living room); Lighting; Rendering; Sound and Editing.

Guanze Wu: Script writing; Rigging (old man, robot, props); Texturing and Shading (scene: kitchen & living room); Lighting; VFX (cloth & water sim); Rendering.

Yuehui Chen: Character textures refinement.

Script and Storyboard

At first, we decided to make a cel-shaded sci-fi animated short, but we were not sure what story to tell. Therefore, we each wrote a script individually and shared them with the group. After discussion, we chose one of the scripts and spent some time refining it.

Here is the initial script:

In the near future, robots are widely used in people’s daily lives. A computer virus outbreak leads to chaos, which strains the relationship between the oversensitive protagonist and the robot he suspects. A misunderstanding drives the protagonist to fight with his robot, and he dies accidentally in the fight. The story finally reveals that the robot was never affected by the virus at all.

Then Wanxuan drew the storyboard. We discussed it later and realized that some of the camera work in the storyboard needed to be adjusted. In order to understand the camera movements, we shot a video as a reference for our layout and camera settings.

Modeling

We evaluated the modeling tasks in this project and decided to model the character and robot ourselves, while the other objects in the room would be collected from online asset libraries.

The character and robot were modeled by Zhengzhong, while the other models were sourced by Wanxuan and Ziyin. The design and modeling of the robot went smoothly. However, we ran into some problems when designing the character. Wanxuan finally drew concept art for the character, which solved the problem.

After that, Zhengzhong quickly modeled the character. He then gave these models to me.

Rigging

I did all the rigging after Zhengzhong sent these models to me. I rigged the character and robot first; all the rigs were built without any scripts or plugins. After that I realized that we needed a newspaper that could be folded and curled, and a rigged teapot was required in the animation too. Therefore, I modeled, textured and rigged the newspaper and the teapot. Unfortunately, the newspaper didn’t work well in the animation, which was entirely my fault.

Texturing

After modeling and rigging, we started texturing our characters and the scene. We invited Yuehui Chen, who studies character animation at CSM, to help us with some of the texturing. Wanxuan finished the first version of the character textures and Zhengzhong finished the robot textures. I did the first version of the scene textures. After that, some of the furniture inside the room was replaced by new models, so I asked Wanxuan to help me finish those textures while Zhengzhong and I kept refining the others.

Layout

While we were making textures, Ziyin started building the layout. We were satisfied with some of the camera work in the first version of the layout, but the rest of it still had problems. Therefore, Ziyin did some research on camera shots and then finished a second version of the layout. The second version looked much better, and we decided to animate based on it.

Lighting

We then started lighting the scene. Zhengzhong, Wanxuan and I finished the scene lighting: I did the first version, while Zhengzhong and Wanxuan refined parts of it. Zhengzhong and I also figured out how to set up the proper AOVs for toon shading. After that we rendered the scene and tested the rendered layers inside After Effects.

Animation

Ziyin did all the animation by himself, which was a huge amount of work. He animated nearly 40 shots. He was quite strict with himself and even remade some of the animation. Thanks to his work, our characters started moving and the scene became much more interesting.

Rendering, VFX and Compositing

In the following weeks we started to render the animation, while Ziyin kept working on the unfinished shots. Wanxuan and I rendered the animation layers, while Zhengzhong dealt with the compositing. Since some of the shots required water and cloth simulation, I used Houdini to simulate these visual effects. I then sent the files to Zhengzhong so that he could build layers for the effects and the scene.

Sound and Editing

Unfortunately we didn’t have time to ask sound arts students to help us make the sound and music, so we decided to finish this part ourselves. Wanxuan finished the editing of the whole animation and most of the sound. Time was so limited that we only had the chance to make a first version of the animation. Although parts of it could be taken to another level, we were still satisfied with the final result.

Reflections:

As a student with a background in industrial design, this was my first time collaborating with other students to make a short animated film, and it was a great experience for me. Since time was quite limited, we didn’t have enough time to refine some of the details in this animation. I am quite happy with the result, but I think we could do better with more time. I also learned a lot in this project: how to solve new problems, how to keep the project rolling and how to overcome difficulties. Parts of the process were a real struggle, but we finally achieved what we wanted. I also realize that my rigging and simulation skills are still not good enough, and I need to spend more time learning and improving them in the future.

Week_08: Rendering & Simulation

This week we were still focusing on rendering and compositing.

What I did this week was almost the same as last week: I finished the remaining part of the simulation and rendered more of the animation.

When I did the simulation for the dream sequence, I realized that the character’s eyebrows still had some problems. I had used a wrap deformer to rig the character’s eyebrows. It works well while the character stays at its initial size, but it behaves oddly once the character is scaled. Frankly, that was a careless mistake. Since there was no time left to adjust the rig, I asked Ziyin to scale the character back to its initial size to keep the bug from appearing. Although a wrap deformer makes the rigging process faster, it has its limitations. Next time I will use joints to rig the eyebrows instead, which is much safer.
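For my own reference, this is roughly the joint-based eyebrow setup I would try next time, written with Maya’s Python commands module. The mesh name and joint positions are hypothetical placeholders, not values from our actual scene.

```python
# A minimal sketch of a joint-based eyebrow rig, assuming Maya's Python commands module.
# The mesh name and joint positions below are placeholders.
import maya.cmds as cmds

EYEBROW_MESH = "brow_L_geo"                # assumed eyebrow geometry name
JOINT_POSITIONS = [(-1.0, 10.2, 1.5),      # assumed positions along the brow
                   (-0.5, 10.4, 1.6),
                   (0.0, 10.5, 1.6)]

# Build a small joint chain along the brow so it can be posed with the head.
cmds.select(clear=True)
brow_joints = [cmds.joint(position=pos, name="brow_L_{:02d}_jnt".format(i))
               for i, pos in enumerate(JOINT_POSITIONS)]

# Bind the eyebrow mesh to the joints. Unlike the wrap deformer, a skinCluster
# follows the character's overall scale without distorting the brow.
cmds.skinCluster(brow_joints, EYEBROW_MESH, toSelectedBones=True, maximumInfluences=2)
```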

The simulation Zhengzhong wanted is water pouring out of the pot and colliding with the water inside the teacup. He wanted the waterline to keep rising, to show how dangerous the character’s situation is. Therefore, I used the FLIP solver in Houdini to simulate that scene. I created two FLIP sources: one for the ground water sim and the other for the pouring water sim, and the two sources collide with each other.

My computer is not powerful enough for a high-resolution water simulation, so the detail is not entirely satisfying. There was also another problem: the camera passes through the water in part of that sequence. Ziyin said the camera should not be changed, so I had to use a trick to deal with the problem.

What I did was simulate the water twice: one cache contains only the water coming from the pot, and the other contains the whole simulation with collision and interaction. I then used a Switch node to switch between the two caches.

When the camera moves, I switch to the pot-only cache (without collision) to prevent the water from crossing the camera. After that I cached the whole thing out to another Alembic file, which is the final version.
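For reference, the switch trick boils down to something like the following, written with Houdini’s Python (hou) module. In the actual scene I built the nodes by hand in the network editor, and the node names, cache paths and frame range here are placeholders.

```python
# A minimal sketch of switching between two water caches, assuming Houdini's hou module.
# Paths, file names and the frame range are placeholders for the real scene values.
import hou

geo = hou.node("/obj/water_sim")                       # assumed geometry container

full_sim = geo.createNode("file", "full_sim_cache")    # full sim: pouring water + collision
full_sim.parm("file").set("$HIP/cache/water_full.abc")

pot_only = geo.createNode("file", "pot_only_cache")    # pot water only, no collision
pot_only.parm("file").set("$HIP/cache/water_pot_only.abc")

# Switch SOP: input 0 = full sim, input 1 = pot-only cache.
switch = geo.createNode("switch", "camera_fix_switch")
switch.setInput(0, full_sim)
switch.setInput(1, pot_only)

# Show the pot-only cache while the camera travels through the water
# (placeholder frame range), and the full sim everywhere else.
switch.parm("input").setExpression("if($F >= 1040 && $F <= 1100, 1, 0)")
```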

The final version of the sim:

Reflection: After using the FLIP solver several times, I have realized that it is not well suited to simulations in small, confined spaces, because the particle collisions often show small issues. To get a better result, the substeps have to be increased, which makes the sim much slower. I will do some research on this and try to figure out a better way to prevent these small problems, and I will also look into how to improve the detail of the water sim.

Week_07: Simulation & Rendering

This week our team focused on rendering and compositing. Ziyin will finish the remaining animation, while Wanxuan and I focus on rendering. Zhengzhong is responsible for the compositing.

Since the reference project still had some problems, I fixed most of the bugs in the first 18 scenes, which took plenty of time and energy. We then reorganized the way assets are referenced to make sure these bugs would not come back. The scene files became much more stable after the fixes, so we can render and layer these scenes smoothly.

Zhengzhong told me that the quality of the animated 2D water textures in some of the robot’s close-ups was not good enough, so I used Houdini to make liquid simulations to replace them. Although the position of the pot and cup is quite different in each scene, Houdini’s node-based workflow means I only need to swap the Alembic files at the top of the simulation network and recache the files, as sketched below.

Source, DOP and Cache networks
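The per-shot swap reduces to something like this hou snippet. The node paths and file names are placeholders, and parameter names may differ slightly between Houdini versions.

```python
# A minimal sketch of swapping the input Alembic and recaching a shot,
# assuming Houdini's hou module. All paths and names are placeholders.
import hou

# The Alembic SOP at the top of the sim network brings in this shot's pot and cup geometry.
abc_in = hou.node("/obj/water_sim/alembic_in")          # assumed node path
abc_in.parm("fileName").set("$HIP/geo/shot_042_pot_cup.abc")

# Point the File Cache SOP at a per-shot output and write the sim to disk again.
cache = hou.node("/obj/water_sim/filecache_water")      # assumed node path
cache.parm("file").set("$HIP/cache/shot_042_water.$F4.bgeo.sc")
cache.parm("execute").pressButton()                     # assumed "Save to Disk" button
```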

We rendered roughly 15-18 shots this week. Time is still limited, but we are confident that all the rendering and compositing tasks can be finished before the deadline. Next week we will finish everything that is left. We cannot wait to see the final version of our animation.

Week_07: Research and Structure

The application of body language and micro-expressions in virtual character emotional expressions

1. Introduction: Emotion can be an important part of performance animation because it builds the relationship between characters and the audience. Drawing on subjects such as psychology and evolutionary biology, scientists believe that body language and micro-expressions have a definite connection to human emotions, and that this connection can be decoded in a scientific way. Their research provides the basis for computational modeling of the face, from automated recognition to making emotional expression possible in virtual characters. This paper draws on recent research on body language, micro-expressions and animation techniques, analyzes real cases in film, animation and practice, and tries to find an artistic-scientific blended approach to building character emotions in performance animation.

Keywords: Facial expressions, Emotions, Animation.

2. Methodology:

3. Main body:

The development of scientific and psychological analysis of facial expressions and body language:

Guillaume Duchenne’s development:

The most remarkable investigation of facial expression of its time was by Guillaume Duchenne. It is remarkable because he documented his scientific research with the then-new medium of photography in the 1860s. He investigated facial articulation by stimulating facial muscles with moist electrodes that delivered direct “galvanic” current to key motor points on the surface of the face. More recently, Duchenne’s approach of transcutaneous electrical nerve stimulation has been adopted in performance art, where a performer’s face can be controlled via computer programs such as Text-to-Speech, with some intriguing results.

The Mimic language:

The Mimic Language developed by Hjortsjö is one of the earliest attempts to investigate and systematize the muscular activities that create the diverse range of facial expressions. Hjortsjö’s motivation was to develop a language for describing facial expression. According to Hjortsjö, mimicry includes the play of the facial features, gestures, and postures.

The concept of mimicry includes additional expressive movements in the form of gestures and postures that are characteristic manifestations of emotional states. Hjortsjo refers to these movements as the mimic co-movements, which include movements of the jaw, the neck, the shoulders, the arms, and the hands.

The words of the Mimic Language correspond to facial expressions. These words, or expressions, are formed by combining the letters of the language: the actions of the mimic muscles and the mimic co-movements. Hjortsjö describes the facial expressions corresponding to twenty-four emotions, and these expressions are arranged in eight groups.

Facial Action Coding System (FACS):

The Facial Action Coding System (FACS), developed by Paul Ekman and Wallace Friesen in 1978, breaks facial actions down into small units called action units (AUs). Each AU represents an individual muscle action, or the action of a small group of muscles, as a single recognizable facial posture. In total, FACS classifies 66 AUs which, in combination, can generate defined and gradable facial expressions. As a result, FACS has been used extensively in facial animation over the past decade to help animators interpret and construct realistic facial expressions. FACS describes the set of all possible basic AUs performable by the human face. According to Ekman, “FACS allows the description of all facial behavior we have observed, and every facial action we have attempted.”

The analysis of body language and micro-expressions in real practice:

I may analyze some examples from feature films or games to figure out how these characters use body language and micro-expressions to represent their emotions in layers. The examples may come from sequences that contain varied or even multi-layered emotions, for example The Last of Us series, Detroit: Become Human, Se7en and The Silence of the Lambs. I may also use some bad examples and analyze their problems to draw a critical conclusion.

The exploration of virtual characters’ body language and micro-expressions:

I may present some test or experimental performance animation of my own, analyze it and point out the problems.

Experimental performance animation refinement and summary:

Conclusions:

blabla

References:

Mandal, M.K. and Awasthi, A. (2015). Understanding Facial Expressions in Communication: Cross-cultural and Multidisciplinary Perspectives. New Delhi: Springer India.

Ekman, P., Friesen, W.V. and Hager, J.C. (2002). Facial action coding system. Salt Lake City: Research Nexus.

Osipa, J. (2010). Stop Staring: Facial Modeling and Animation Done Right. Indianapolis, IN: Sybex.

Martinez, L., Falvello, V.B., Aviezer, H. and Todorov, A. (2015). “Contributions of facial expressions and body language to the rapid perception of dynamic emotions”, Cognition and Emotion, 30(5), pp. 939–952.

Kovecses, Z. (2000). Metaphor and emotion: language, culture, and body in human feeling. Cambridge: Cambridge University Press.

Russell, J.A. and Fernández-Dols, J.M. (2002). The psychology of facial expression. Cambridge: Cambridge University Press.