Week_02: Texturing test & Model Rigging

Rigging:

This week I rigged our robot model and the newspaper prop so that they can be used for layout and animation.

Robot

The robot has IK/FK arms and a global control, which makes it suitable for different kinds of animation. A space-switching attribute lets each arm follow either world space or the robot itself. I also created SDK (set driven key) groups so that animators can key finger grab poses through attributes, which is much faster than animating the individual finger controllers.
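
As a rough illustration of how the SDK groups can be wired, here is a minimal Maya Python sketch; the control and group names are hypothetical placeholders, not the actual names in our rig:

```python
import maya.cmds as cmds

# Hypothetical names: "hand_ctrl" carries the attribute the animator keys,
# and the "index_*_sdk_grp" groups sit above the finger controllers.
cmds.addAttr('hand_ctrl', longName='fingerCurl', attributeType='double',
             minValue=0, maxValue=10, defaultValue=0, keyable=True)

for grp in ['index_01_sdk_grp', 'index_02_sdk_grp', 'index_03_sdk_grp']:
    # Open pose at fingerCurl = 0, curled pose at fingerCurl = 10.
    cmds.setDrivenKeyframe(grp, attribute='rotateZ',
                           currentDriver='hand_ctrl.fingerCurl',
                           driverValue=0, value=0)
    cmds.setDrivenKeyframe(grp, attribute='rotateZ',
                           currentDriver='hand_ctrl.fingerCurl',
                           driverValue=10, value=-90)
```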

I asked a teammate to test this rig and it works correctly. For now the elbow is a weighted skin, which may be replaced by a rigid elbow component later.

Newspaper

I also rigged the newspaper, which can be folded and opened. Each corner can be bent individually, and a single page can be turned with the IK controls on it, which fits the requirements of our animation. I also created two controllers on either side of the newspaper so that the animator can build a folding animation by constraining them to the character's IK arm controllers. However, the rig is still not flexible enough because the skin weights have some small problems. I will keep testing it to make sure it can be used safely in our project.
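
As a rough sketch of that folding set-up, constraining one of the newspaper controllers to the character's IK arm control could be done like this in Maya Python (all names here are hypothetical):

```python
import maya.cmds as cmds

# Hypothetical names: "L_arm_ik_ctrl" drives "newspaper_L_ctrl" so the left
# half of the newspaper follows the hand during the folding animation.
cmds.parentConstraint('L_arm_ik_ctrl', 'newspaper_L_ctrl', maintainOffset=True)

# The constraint weight can then be keyed off when the character lets go, e.g.
# cmds.setAttr('newspaper_L_ctrl_parentConstraint1.L_arm_ik_ctrlW0', 0)
```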

The rig can still be improved based on the animators' requirements, and I may update it next week.

The next step is to model and rig our character. We will need a full-body rig with body controls and face controls so that it can meet the requirements of performance animation.

Texture test:

I also did a texturing test this week to make sure the graphic style we want can be achieved later in the pipeline. We did some research first and decided to aim for a style like this:

We want to reproduce the feeling of the lines, spots and lighting in the reference. The reference mixes 3D and hand drawing, but a similar look can still be achieved entirely in 3D by editing models and textures.

I repainted the texture of the coffee maker in Substance Painter to test the result. The final render was made in Arnold:
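
For reference, hooking the exported Substance Painter maps up to an Arnold shader can be sketched roughly like this in Maya Python; it assumes the MtoA plug-in is loaded, and the shader name and file path are placeholders:

```python
import maya.cmds as cmds

# Assumes Arnold (mtoa) is available; names and the texture path are hypothetical.
shader = cmds.shadingNode('aiStandardSurface', asShader=True, name='coffeeMaker_mat')
base_tex = cmds.shadingNode('file', asTexture=True, name='coffeeMaker_baseColor')
cmds.setAttr(base_tex + '.fileTextureName',
             'sourceimages/coffeeMaker_BaseColor.png', type='string')
cmds.connectAttr(base_tex + '.outColor', shader + '.baseColor')

# Keep the specular low so the render reads flatter, closer to the reference.
cmds.setAttr(shader + '.specular', 0.1)
```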

I used some nodes on the Arnold lights and rough masks to simulate the lighting in the reference, and the result is quite close. The spots on the textures are easy to achieve in Substance Painter with a few masks. The current problem is that the shadows still look very "3D" and the image is not flat enough. I will do another test next week with more textures and try to get a better result.

Week_02: Unity

This week we learned the basics of Unity with Herman. The tutorial can be separated into several parts: the Unity viewport, importing FBX files, importing Maya animation into Unity, and editing animation in Unity.

The Unity viewport is quite similar to a DCC viewport; we can even hold the right mouse button and use WASD to fly through the scene, which is a nice touch. The difference between Unity and a DCC is that Unity has two viewports: an editing viewport designed for set-up and lighting, and a game viewport designed for game designers to check the level and other interactive settings.

We can bring Maya animation into Unity by exporting FBX files. Some problems came up when we dragged the FBX files into Unity: we could not see the animation or even the object. The reason is that the object is too small, and we can scale it by changing its transform values. Then we need to drag the take object (the light blue triangle icon) onto the animated object to activate the animation. However, the moving animation still does not show clearly in the viewport because the animation keeps its original translation values, which makes the movement too small to see. Therefore, we can create an empty GameObject and parent the animated object under it; by scaling the empty parent instead, both the object and its animation become clearly visible.
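
On the Maya side, the FBX export step can be sketched roughly like this in Python (the object name and output path are hypothetical); the baked animation then travels into Unity inside the FBX:

```python
import maya.cmds as cmds

# Make sure the FBX plug-in is loaded before exporting.
cmds.loadPlugin('fbxmaya', quiet=True)

# Export only the animated hierarchy (names and path are placeholders).
cmds.select('robot_root')
cmds.file('C:/unityProject/Assets/robot_anim.fbx', force=True,
          options='v=0;', type='FBX export', preserveReferences=True,
          exportSelected=True)
```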

Animation can be activated by the light blue triangle object

We can also edit animations in the Animator window. It allows an animation to loop, or lets us chain several animations together: when the first animation finishes, Unity automatically starts the second. If we link both animations to each other, they form a new loop and play repeatedly.

Animator viewport
Make a repeated animation by linking different animation nodes

I really like Unity's interaction design and structure. It is much lighter than Unreal Engine, which requires more powerful hardware to run, so Unity is a great platform for beginners to learn the basics of game design and game engines. I may keep learning Unity in the future to get a better understanding of game design and its relationship with animation.

Week_02: Motion capture_02

This week we learned how to apply our motion capture data to a rigged body to bring a character to life. The tutorial can be separated into three parts: applying data to a rigged human skeleton and combining sequences, applying data to a rigged human skeleton and creating controllers, and applying data to a rigged non-human skeleton. The main idea is to link the motion capture data to the skeleton using HumanIK in Maya.

Applying data to a rigged human skeleton system & combining sequences

First, import the motion capture FBX file into Maya and create a sequence from the data in the Time Editor. We can see that every frame is keyed.

Then we use HumanIK to create a new character definition and map each joint of the motion capture skeleton to the definition, so that Maya can identify the different parts of the skeleton.

After that, import the character we want to drive. Set Character in HumanIK to this character, then set Source to our mocap definition. Now the character starts following the motion capture data.

We can import more data by dragging another FBX file into the Time Editor (not the viewport), which creates a new sequence. Then right-click and use a relocator to match the position of the character skeleton; we choose a foot joint as the match point. Once the positions are matched, we can blend the two sequences to make the transition between them more natural.

Applying data to a rigged human skeleton system with no definition & creating controllers

First, do the same things as in part one, remembering to put the character into a T-pose so that it matches the data correctly. Then create a definition for the new character so that it can be linked to the data. After that, link the two definitions as in part one and click the Create Control Rig button in HumanIK to generate controllers on the character.

We can also create an animation layer on top of the baked mocap to polish the animation.
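
A minimal sketch of that step in Maya Python might look like this; it assumes the default HumanIK control naming ("Character1_..."), which is an assumption rather than our actual scene:

```python
import maya.cmds as cmds

# Select the HumanIK control rig controls (default naming assumed here),
# then create an animation layer and add them so polish keys sit on top
# of the baked mocap instead of overwriting it.
cmds.select('Character1_Ctrl_Reference', hierarchy=True)
layer = cmds.animLayer('mocap_polish')
cmds.animLayer(layer, edit=True, addSelectedObjects=True)
```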

Applying data to a rigged non-human skeleton system with controls

First, do the same things as in part one, remembering to put the character into a T-pose so that it matches the data correctly. Then click Create Custom Rig Mapping in HumanIK and map the character's controllers to the HumanIK controller system (this can be thought of as binding the controllers to the motion capture skeleton).

Week_02: Experimental Animation 1 Summary

This week we learned the basics of experimental animation and watched several videos to strengthen our understanding of this type of animation. One of them impressed me a lot: An Optical Poem, made by Oskar Fischinger in 1938.

Oskar Fischinger, An Optical Poem (1938)

Frankly, I am not someone who can easily appreciate experimental animation, since I have never made any before, but I was deeply impressed by Oskar Fischinger's work when I saw it. It is hard to imagine how the artist made this masterpiece over 80 years ago. The changing shapes fit the rhythm perfectly, and the color design of the whole piece is powerful and impressive. One interesting point is that this form of animation is now widely used in music games, which can be seen as an extension of this experimental style. The idea behind this animation was ahead of its time, and it still looks amazing today.

I think the reason it can be considered experimental animation is that it explores the relationship between shapes, colors and sound. It is a bold attempt to mix these elements together and give the audience a rich experience of watching and listening at the same time. The work can also be seen as a reinterpretation of the music, carrying Oskar's own ideas and attitude towards it. Although the form is a little abstract, we can still feel the mood inside the work.

Week_02: Summary

This week I watched several videos that Friendred recommended, which talk about composition, camera work and editing in film production.

From these videos we can see how film masters like David Fincher and Akira Kurosawa use camera, composition and editing to tell a better story and build a strong connection between the film and the audience. David Fincher is one of my favourite directors; he is good at using the camera to trick the audience's eyes. He likes using motion tracking and subtle camera tilts to capture characters' detailed movements and reveal their emotional state. This kind of shot can be found in most of his works, such as Se7en, The Social Network and Gone Girl. It is a demanding shooting method, because the camera operator may need to shoot the same scene dozens or even hundreds of takes to follow the characters' motion accurately. However, his films prove that the method is effective and powerful. In 3D production, Fincher's approach can be reproduced in digital scenes much more efficiently, because we can adjust camera tilt simply by changing the camera's values, which saves a lot of time. Frankly, I have not used this method in my own animation before; maybe I will try it in this collaborative unit.

Akira Kurosawa's composition and movement design, on the other hand, is unique and stylish and hard to imitate, but the mechanics behind it are still instructive. As the video explains, the use of nature (rain, smoke), the movement of groups, the exaggeration of individual movement, fluid camera work and cutting on movement are key elements in Kurosawa's films. The use of nature creates a strong atmosphere and can also express the characters' emotions. Cutting on movement is also inspiring and can be used in animation editing to improve the cinematic feel. The movement of crowds can create spectacular scenes as well, although it is difficult to achieve in animated films, since the movement of many people or creatures is complicated to animate, or even to capture.

Well-designed cutting not only improves individual shots but also tightens the whole production so that the rhythm of the film feels right. The video Cuts & Transitions 101 covers most types of cuts in film editing, and some of them can be used in our collaborative unit project and other productions. For example, we designed a dream sequence in our story to show the character's fear, so we can use a smash cut to pull the character back to reality and handle the transition from a tense sequence to a slower-paced narrative. A close-up cross cut can also be used to show the character's nervousness when the safe environment around him breaks down.

Someone once said that there are no better camera shots, only better uses of camera shots, which points to the importance of shot design. The same idea applies to editing, setting and so on. In my view, practice is the best way to master these techniques: by using lens language and editing techniques consciously in our productions, we may find the right way to arrange our shots, or even find our own style.

Week_02: Collaboration Unit Project Brief

Team members:

Zhengzhong Liang, Wanxuan Liu, Ziyin Wang, Guanze Wu.

Project: Shut Down (Tentative)

The project is to make a cel-shaded, near-future sci-fi animated short film.

Cel shading is a type of NPR (non-photorealistic rendering), also called toon shading. Instead of a smooth shade gradient of tints and shades, it uses a small number of flat shading colors to flatten the 3D image and simulate the feel of 2D animation.
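
The core idea can be sketched in a few lines of Python: instead of keeping the smooth Lambert gradient, the diffuse term is snapped into a handful of flat bands. This is only an illustration of the principle, not our actual shader set-up:

```python
import math

def toon_shade(normal, light_dir, bands=3):
    """Quantize the diffuse term into a few flat bands (the cel shading idea)."""
    # Lambert term: cosine of the angle between the surface normal and the light.
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    # Snap the smooth gradient to a small number of discrete steps.
    return math.ceil(n_dot_l * bands) / bands

# Example: a surface at 60 degrees to the light lands in the middle band.
print(toon_shade((0.0, 1.0, 0.0), (0.0, 0.5, 0.866)))  # -> 0.666...
```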

Cel Shading vs Traditional Shading

Story draft: 

In the near future, robots are widely used in people's daily lives. A computer virus outbreak leads to chaos, which strains the relationship between the oversensitive protagonist and the robot he suspects. A misunderstanding drives the protagonist to fight his robot, and he dies accidentally in the fight. The story finally reveals that the robot was never affected by the virus at all.

Key Words: Near-future, Computer Virus, Absurd Story

Task Assignment (Current):

The process of this project may cover: Storyboard, Modeling, Texturing, Rigging, Animation, Lighting and Rendering, Compositing and Editing. This task assignment is for reference only and will be adjusted later based on the progress rate and other requests.

Wanxuan Liu: Storyboard, Editing, Visual effects

Ziyin Wang: Layout, Animation, Composite

Zhengzhong Liang: Modeling, Texturing, Rigging, Lighting and Rendering

Guanze Wu: Modeling, Texturing, Rigging, Lighting and Rendering

References & Mood Boards & Sketches:

Ideas:

We decided to break the story down into three parts: Background and Attempt, Chasing and Fighting, and Result.

Background and Attempt: We decided to use a voiceover, like a TV news broadcast, to deliver the news of the computer virus outbreak. We also decided to use a dream sequence to show the protagonist's horrible imaginings after he hears the news, which strengthens his urge to get away from his robot or even try to destroy it.

Chasing and Fighting: This is probably the most important part of the story. How to design the interactions between the character, the robot and the props in the scene, and how to exaggerate the dramatic tension between the character and the robot, is still under discussion.

Result: We decided to give the story an absurd ending: the character dies because of the misunderstanding.

Week_01: Motion capture_01

This week we learned the basics of motion capture with Anna. Basically, motion capture records real human movement data, which can then be applied to rigged models to produce realistic animation very efficiently.

Tony wore the motion capture suit and performed some actions such as dancing and fighting, and the result could be seen on screen in real time, which was amazing. However, the suit only captures body movement (without the hands), so we could not see how motion capture works on the face and hands, which is a pity.

From my point of view, motion capture is really handy; it lets animators focus on more detailed work rather than on blocking in keys. However, I still think hand-keyed animation is irreplaceable, because only animators can control the fine details of facial expressions and gestures and make the animation more artistic. In brief, motion capture is a useful tool, but it still has limitations.

Week_01: Notes

Storyboarding

style determination

  • humanoid vs cartoon character
  • real world simulation vs anime
  • 2d vs 3d

working with hierarchy

determine how many frames and how much detail you want to show for a single action

Pose to Pose

Stage Appeal

proportion can also give emotion and characteristics

Keep it Simple

"Form follows function" (Louis Sullivan)

Staging

  • Camera angle
  • Timing
  • Acting
  • Setting

Camera:

  • controls the presentation
  • screen space is restricted by camera position, VR is not

Camera distance:

  • Character
  • Buildings

Timing:

pausing

Acting:

Silhouette technique: camera angle

Setting:

Emotion, characteristics and themes

the design of lighting and color can reflect the emotional state of characters

Props: things in the space should align with the character

Detail balancing: do not overdetail

Week_01: FMP Thesis Proposal Ideas

Consider the following questions and try to provide brief answers on your blog for next week.

  • On graduation which area or environment of production do you wish to focus upon and why?
  • What skills will you need to attain the standards required for vocational practice?
  • How will you showcase your FMP practice for the final shows?
  • Is it important to directly connect the thesis research to your practical work?
  • Do you have an area of research you wish to conduct that is unrelated to practical element?

There are several areas that I would like to focus on in my thesis and project:

Area 1: Facial performance animation

Facial performance animation is an area I would like to study and research in depth. I have been moved by the facial performances in CGI films and games such as The Last of Us, L.A. Noire and the Resident Evil series. The technique has been used in animated film production for years, and its use in computer games is increasing. As the technology develops, it is foreseeable that facial performance animation will be used even more widely in the future.

From my point of view, great facial performance animation enhances character development and builds a close relationship between viewers/players and the characters. The small details in facial animation can carry tons of information, which makes it more artistic and playful. Although many realistic facial animations are produced with motion capture, I still believe hand-keyed facial performance is more impressive and touching.

The Last of Us Part II: facial and body performance by Andrew Ford

To improve at facial performance animation, several things must be learned: facial anatomy, facial expressions and rigging. Learning the basics of motion capture and acting theory is also important.

Area 2: Realistic shading for characters

Another topic I'm interested in is realistic skin shading, also called subsurface scattering (SSS) skin shading. Realistic skin texturing covers several areas, such as sculpting facial and body details, multi-channel texture creation, and rendering.

Research in this area can also be expanded. One example is blended dynamic normal maps, which can be driven by rig controllers to achieve a better facial performance result. Another is research into the differences in realistic shading between skin tones, which could support a more detailed thesis. There is also the relationship between emotions, actions and skin appearance: people's actions and emotions change how their skin looks; for example, a face may turn red when someone is angry.

To improve at skin shading, several things must be learned: anatomy, SSS shading, texturing, and skin lighting and rendering. Some simple rigging techniques are also required.

Area 3: Facial rigging or other advanced rigging

Further research into and practice of character or animal rigging would improve my understanding of software, animation and film/game production. Therefore, facial or other advanced rigging techniques could be another area for my thesis. These techniques may include FACS (Facial Action Coding System) based rigging, quadruped creature rigging and more.

Week_01: Intro to the unit and projects

Create an entry for this week detailing your responsibilities, thoughts, what projects you are looking forward to working on. What you are aiming to get out of it and where you are wanting your career to be heading.

Make a Team

Currently our team is formed of four members: Zhengzhong Liang, Wanxuan Liu, Ziyin Wang and me. Unfortunately we still haven't found a student from another course to collaborate with, but we will try again next week.

Project

Our project is to make a cel-shaded, near-future sci-fi animated short film. Cel shading is a type of NPR (non-photorealistic rendering), also called toon shading. Instead of a smooth shade gradient of tints and shades, it uses a small number of flat shading colors to flatten the 3D image and simulate the feel of 2D animation.

This week our group had several meetings to discuss our story draft and each member's tasks. Everyone in the team wrote a short story based on the near-future sci-fi setting. We discussed each story and finally chose one as our draft, then redesigned parts of it and polished its logic and plot.

In brief, our story is: in the near future, robots are widely used in people's daily lives. A computer virus outbreak leads to chaos, which strains the relationship between the oversensitive protagonist and the robot he suspects. A misunderstanding drives the protagonist to fight his robot, and he dies accidentally in the fight. The story finally reveals that the robot was never affected by the virus at all.

My responsibilities

The project work can be separated into several areas: storyboarding, modeling, texturing, animation, lighting and rendering, compositing and editing, which is a normal pipeline for a 3D animated short film.

Currently I am responsible for part of the modeling, texturing, and lighting and rendering. Since everyone's responsibilities are flexible, the schedule may change later. I will carefully weigh my responsibilities and document them in my weekly blog posts.

Thoughts & Research