Week_01: Smokesim_sequence

I decided to use what I have learned so far to make a cinematic shot by myself. I chose a shot from Blade Runner 2049 as a reference and recreated it.

I first rebuilt the scene with some assets, for example the sci-fi vehicle from the movie and a few buildings. The scene is quite stylised, covered with fog and dust, and the lighting of the whole scene is equally stylised, so it took me some time to recreate the atmosphere. I chose Redshift as the render engine to get results quickly.

I also built some stones in Houdini and laid them out randomly by adding some random/noise attributes.
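A minimal sketch of this kind of per-point randomisation, written as a Houdini Python SOP; the attribute names and value ranges here are placeholder assumptions, not my exact setup.

```python
# Minimal Python SOP sketch: give each scatter point a random scale and
# Y rotation for a copy-to-points step. Attribute names and ranges are
# illustrative assumptions rather than the values from my scene.
import random
import hou

node = hou.pwd()
geo = node.geometry()

geo.addAttrib(hou.attribType.Point, "pscale", 1.0)
geo.addAttrib(hou.attribType.Point, "rot_y", 0.0)

for pt in geo.points():
    random.seed(pt.number())                      # stable per-point seed
    pt.setAttribValue("pscale", random.uniform(0.5, 1.5))
    pt.setAttribValue("rot_y", random.uniform(0.0, 360.0))
```

The same thing can be done with Attribute Randomize or a VEX wrangle; the point is simply to seed the randomness per point so the layout stays stable between cooks.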

I finished the smoke simulation in Houdini after I had built the whole scene and animated the vehicle.

I then rendered the sequence with Redshift and composited it in Nuke.

Term3_Week01: Project2 Brief (Premise Project)

The main idea of Project 2 is to make a small project that tests a personal creative/technical methodology as a “proof of concept”, which can be considered preparation for our FMP. Therefore, I have decided to test and dig deeper into some visual effects techniques that I may apply to my FMP in the future.

What I would like to do for my FMP is an urban legend (or creepypasta) themed short film. The main idea of this film is to tell an uncanny or horror story, which is quite different from other horror topics like zombies or ghosts. The film may be based on ideas from Control, SCP (Secure, Contain, Protect), The Backrooms or The Thing.

In Project 2 I’ll dig deeper into some visual effects techniques for my FMP topic, for example smoke and fire (for explosions, gunfire or the environment), CFX such as cloth and soft bodies (for creatures), and destruction (for the environment). I will mostly focus on these three parts.

The main purpose of this project is to solve the special effects problems that I may encounter in the FMP. I will do in-depth research on urban legend subjects and their application in film, and I will also try to figure out how to make shots for horror urban legend themed films based on that material and research.

Week_05: Project Topic Ideas

Since I have been focusing on the collaborative unit project for the last few weeks, I haven’t spent much time on the advanced course’s projects. I have just done some research and found a few things that I would like to dig deeper into in my project.

Project 01

For project one, I’m thinking about a project that combines facial rigging and facial animation. As an animator, I think understanding the relationship between rigging and animation is quite important. What’s more, I have a great interest in facial animation, especially CGI-style character facial animation, which is more cinematic. I also have a foundation in character rigging and facial rigging. Therefore, I want to improve both of these techniques in this project and strengthen my understanding of the animation pipeline. My FMP and thesis will also be related to this topic, so this project can be considered the beginning of my FMP, and I will be more confident in my thesis and practice after finishing it.

Facial animation & rigging
What I did for our collaborative project (body & facial rig), which can still be improved further

What I would like to achieve:

A facial animation or performance animation (still focusing on the face) lasting about 20-30 seconds, based on references I make myself (the audio clip may come from a film). The animation will focus on two aspects: lip-sync animation and emotional animation. No rendering.

A 10-20s rigging demo which introduces the rigging process.

The finished piece will be a 30-50s video that combines both of the elements mentioned above.

What I need to prepare:

The project requires further rigging and facial animation techniques to solve the problems I may meet during the process. I also need to research CGI facial animation (film & games), advanced facial rigging and the industry pipeline.

Time schedule:

2 weeks for research and learning, 3-7 days for rigging, and 2-3 weeks for animation.

Project 02:

I haven’t decided what to do for the second project yet. At the moment I would like to make something VFX-based; I may focus on smoke & explosion and liquid solvers in Houdini and try to make some cinematic VFX. I already have a foundation in Houdini SOPs, VEX and several solvers, but I want to dig deeper into simulation. Since the Houdini viewport render quality is good enough, I don’t need to spend much time on rendering and can concentrate on the simulation instead, which saves quite a lot of time.

The current problem is that I am not sure whether this topic is suitable for this course, and I haven’t made a detailed plan for it yet. I will discuss it with my tutor next week.

Another artist’s demo, just as a reference

Week_04: Mechanical modelling and animation

This week we learned to make a mechanical animation: a trick box with something inside that flips its own handle off. The tutor asked us to design the thing inside the box, which could be anything, such as a bear, a tool or a robot arm. The main idea of the exercise is to practise designing from a simple starting idea.

I first made a copy of the box that the tutor built in class, and added some controllers to it so that I could animate it more easily. After that I remodelled the thing inside the box as a hammer. I decided to have the hammer hit the handle to turn it off, which makes the box a little more playful.
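As a minimal sketch of this kind of control setup (with placeholder object names, not the ones in my actual scene): a circle curve driving the geometry through a parent constraint, done here in Maya Python.

```python
# Minimal sketch: create a circle controller and drive a mesh with it.
# "hammer_geo" is a placeholder name for the hammer geometry.
from maya import cmds

ctrl = cmds.circle(name="hammer_ctrl", normal=(0, 1, 0), radius=2.0)[0]
cmds.delete(ctrl, constructionHistory=True)       # clean the curve history

cmds.matchTransform(ctrl, "hammer_geo")           # snap the control onto the geo
cmds.makeIdentity(ctrl, apply=True, translate=True, rotate=True, scale=True)

# The geometry now follows the control, so keys go on the control only.
cmds.parentConstraint(ctrl, "hammer_geo", maintainOffset=True)
```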

Week_03: Architecture & Environment Notes

Architectural Space vs Digital Space

  • Occupants of real-life and digital spaces share the same principles of the architectural system.
  • Line (1D), Plane (2D) and Volume (3D) -> Configuration of a Form with its characteristic outline.
  • Position of a form is relative to how it is being seen within the environment
  • Space constantly surrounds our being
  • Designing a space brings in the consideration of its function
  • Animation/Video game genre defines the rules

Game vs Moving Image

A game is an activity engaged in through play, while a moving image is the director’s vision.

Definition of Play

Play is the action used to engage with a game.

  • Play is like both work and rest without being either
  • Sometimes we play in order to acknowledge something, rather than for ‘play’ itself
  • Play can be a form of simulation; it is experiential learning within a simulated scenario
  • Play is the player’s unique behaviour

Narrative

Narrative determines Theme/Style.

Stage & Environment

  • Level, Route / Path
  • Aesthetic
  • Theme and style driven by the narrative
  • Art and Architecture
  • Environments that can become part of the “play”

How games/anime push the narrative for architecture

  • A narrative approach proposes a way of explaining or understanding events through a story or written account
  • There is potential to explore architecture almost without restriction, yet it still contains its own logic.
  • These technologies can express the form of narrative: the idea of affecting how we think about architecture through entertainment.

Digital Space – Breaking the Boundary

  • The difference between designing space for reality and for the digital world is the user/audience
  • The goal of a game space is to create a game experience that serves its rules of play and narrative well
  • The game genre determines the boundary
  • Simulations need to clone the elements of our world in close detail, giving the user a life-like experience of the space.
  • Games with futuristic settings need to produce spaces that do not appear in the current era of architecture, creating experiences that can only be achieved in a virtual world.
  • Bringing the configuration and function of architectural thinking into virtual space can strengthen this relationship and deepen the degree of immersion.

Week_02: Unity

This week we learned the basics of Unity with Herman. The tutorial can be separated into several parts: the Unity viewport, importing the FPS file, importing Maya animation into Unity, and editing animation in Unity.

The Unity viewport is quite like a DCC’s: we can hold the right mouse button and use WASD to fly through the viewport, which is distinctive. The difference between Unity and a DCC is that Unity has two viewports: an editing (Scene) view designed for set-up and lighting, and a Game view designed for game designers to check the level and other interactive settings.

We can bring Maya animation into Unity by exporting FBX files. Some problems come up when we drag FBX files into Unity: we cannot see the animation, or even the object. The reason is that the object is too small, so we can scale it up by changing the object data. Then we need to drag the take (the light blue triangle object) onto the animated object to activate the animation. However, the movement still doesn’t show in the viewport, because the animation’s translation values keep their original scale and the motion is hard to see. Therefore, we can create an empty GameObject and parent the imported object under it; by scaling the empty GameObject instead, both the object and the animation can be seen clearly.

The animation can be activated by the light blue triangle (take) object

We can also edit the animation using the Animator view. It allows an animation to loop, or even combines several animations: when the first animation finishes, the software automatically triggers the second one. If we link the two animations to each other in both directions, they form a new loop and play repeatedly.

Animator viewport
Make a repeated animation by linking different animation nodes

I really like the interaction design and structure of Unity; it is much lighter than Unreal Engine, which needs better hardware to run. This makes Unity a great platform for beginners to learn the basics of game design and game engines. I may learn more Unity in the future to get a better understanding of game design and its relationship with animation.

Week_02: Motion capture_02

This week we learned how to apply our motion capture data to a rigged body to bring the character to life. The tutorial can be separated into three parts: applying data to a rigged human skeleton and combining sequences, applying data to a rigged human skeleton and creating controllers, and applying data to a rigged non-human skeleton. The main idea is to link the motion capture data to the skeleton using HumanIK in Maya.

Applying data to a rigged human skeleton system & combining sequences

First, import the motion capture FBX file into Maya and create a sequence from the motion capture data in the Time Editor. We can see that every frame is keyed.
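As a side note, the import step can also be done through script, which is handy for batching several takes; a minimal sketch with a placeholder file path, assuming the fbxmaya plug-in is available:

```python
# Minimal sketch of importing a mocap FBX through script rather than the UI.
# The file path and namespace are placeholders.
from maya import cmds

mocap_file = r"D:/mocap/take_01.fbx"              # placeholder path

cmds.loadPlugin("fbxmaya", quiet=True)            # make sure the FBX plug-in is loaded
cmds.file(mocap_file, i=True, type="FBX", ignoreVersion=True, namespace="mocap")

# Quick check that the skeleton came in: list the imported joints.
print(cmds.ls("mocap:*", type="joint"))
```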

Then we need to use HumanIK to create a new character definition and link each part of the motion capture skeleton to the definition, so that Maya can identify the different parts of the skeleton system.
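If this step ever needs to be scripted (for batch retargeting, say), HumanIK is driven through MEL procedures rather than maya.cmds. The sketch below reflects my understanding of those entry points; the procedure names, the character name and the bone slot index are assumptions, so treat it as a rough note rather than a recipe.

```python
# Rough sketch of scripting the HIK character definition. The proc names and
# the slot index (1 = Hips) are assumptions based on the HIK scripts that ship
# with Maya; verify them before relying on this.
from maya import mel

mel.eval('HIKCharacterControlsTool;')                       # load the HIK tool/scripts
mel.eval('hikCreateCharacter("MocapChar");')                # new character definition
mel.eval('setCharacterObject("mocap:Hips", "MocapChar", 1, 0);')  # assign a joint to a slot
mel.eval('hikUpdateDefinitionUI();')                        # refresh the definition UI
```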

After that, import the character that we want to drive. Set the Character in HumanIK to this character, then set the Source to our mocap definition. The character now starts following the motion capture data.

We can import another take by dragging its FBX file into the Time Editor (not the viewport), which creates a new sequence. Then right-click and use match relocators to reposition the character skeleton; we need to choose a foot joint to match. Once the positions are matched, we can blend the two sequences to make the transition between them more natural.

Applying data to a rigged human skeleton system with no definition & creating controllers

First, do the same things as in the first part, remembering to put the character into a T-pose so that it matches the data correctly. Then create a definition for the new character so that it can be linked to the data. After that, link the two definitions as in part one and click the Create Control Rig button in HumanIK to generate controllers on the character.

We can also create an animation layer on top of the result to refine the animation; a small scripted version of this idea is sketched below.
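A minimal sketch of that idea through script; the controller naming convention here is a placeholder, not the rig's actual names.

```python
# Minimal sketch: put the selected controls into an anim layer so tweaks sit
# on top of the baked mocap non-destructively. "*_ctrl" is a placeholder
# naming convention.
from maya import cmds

ctrls = cmds.ls("*_ctrl", type="transform")
cmds.select(ctrls, replace=True)

tweak_layer = cmds.animLayer("mocap_tweaks", addSelectedObjects=True)

# Keys set while this layer is active adjust the result without touching the
# original mocap curves; the layer weight can be dialled back at any time.
cmds.animLayer(tweak_layer, edit=True, weight=1.0)
```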

Applying data to a rigged non-human skeleton system with controls

First, do the same things as in the first part, remembering to put the character into a T-pose so that it matches the data correctly. Then click Create Custom Rig Mapping in HumanIK and match the controllers on the character to the controller system in HumanIK (this can be thought of as binding the controllers to the motion capture skeleton).

Week_01: Motion capture_01

This week we learned the basics of motion capture with Anna. Essentially, motion capture records real human movement data, which can then be applied to rigged models to produce realistic animation very efficiently.

Tony put on the motion capture suit and performed some actions such as dancing and fighting, and the result could be seen on the screen in real time, which was quite amazing. However, this suit only captures body movement (without the hands), so we could not see how motion capture works on the face and hands, which is a pity.

From my point of view, motion capture is really handy; it lets animators focus on more detailed work rather than tasks like setting keys. However, I still think hand-keyed animation is irreplaceable, because only animators can manage the details of facial expressions and gestures and make the animation more artistic. In brief, motion capture is a useful tool, but it still has limitations.

Week_01: Notes

Storyboarding

style determination

  • humanoid vs cartoon character
  • real world simulation vs anime
  • 2d vs 3d

working with hierarchy

determine how many frames and how much detail you want to show for a single action

Pose to Pose

Stage Appeal

proportion can also convey emotion and character

Keep it Simple

“Form follows function” (Louis Sullivan)

Staging

  • Camera angle
  • Timing
  • Acting
  • Setting

Camera:

  • controls the presentation
  • screen space is restricted by camera position, VR is not

Camera distance:

  • Character
  • Buildings

Timing:

pausing

Acting:

Silhouette technique: camera angle

Setting:

Emotion, characteristics and themes

the design of lighting and color can reflect the emotional state of the characters

Props: things in the space should align with the character

Detail balancing: do not overdetail