Facial Motion: Motion Capture-Based Animation

INTRODUCTION: Motion capture-based animation is very popular nowadays. Its applications include movies, video games, and human-computer interface design. By capturing the motion of a human face's expressions, it becomes easier to make an animated character lively and realistic. Computer-animated characters are now an essential component of computer games, movies, web pages, and many other applications. To make these characters realistic and convincing, they require complex facial expressions and motion. Traditionally, facial expressions have been produced through manual keyframing techniques, which give the best-quality animation but are costly and slow. This report attempts to address the problem of data representation.
We organize this report into the following categories: speech content editing, facial expression analysis and modeling, facial motion retargeting, and head motion synthesis.

Speech motion synthesis: Speech motion synthesis, also known as lip-synching, refers to generating facial motion that is synchronized with input speech. Most approaches segment the input speech signal into standard speech units, such as phonemes; this labeling can be done manually or automatically. The speech units are then mapped to a set of lip poses called visemes. Visemes are the visual counterparts of phonemes, and they can be interpolated to produce smooth facial animation. The shape of the mouth during speech depends not only on the phoneme currently being pronounced, but also on the phonemes that come before and after it. This phenomenon is called co-articulation. Co-articulation can affect up to 10 neighboring phonemes simultaneously; given that the English language has 46 phonemes, it is clear that a simple lookup table is not a practical solution. Many approaches attempt to solve the lip-synching problem and model co-articulation. Most of them fall into the following four categories.
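The phoneme-to-viseme pipeline described above can be sketched in a few lines. This is a minimal illustration, not the report's actual system: the phoneme symbols, the three-component lip-pose vectors, and all numeric values are invented for demonstration, and plain linear interpolation stands in for the more sophisticated co-articulation models the report surveys.

```python
import numpy as np

# Hypothetical viseme table: each phoneme maps to a small lip-pose vector
# (here: jaw opening, lip width, lip protrusion). Values are illustrative only.
PHONEME_TO_VISEME = {
    "AA": np.array([0.9, 0.5, 0.1]),  # open mouth, as in "father"
    "M":  np.array([0.0, 0.4, 0.2]),  # closed lips
    "UW": np.array([0.3, 0.1, 0.9]),  # rounded, protruded lips, as in "boot"
}

def lerp_pose(a, b, t):
    """Linearly interpolate between two lip poses (t in [0, 1])."""
    return (1.0 - t) * a + t * b

def synthesize_track(phonemes, frames_per_phoneme=4):
    """Produce a smooth sequence of lip poses by interpolating between
    the viseme of each phoneme and the viseme of the next one."""
    poses = [PHONEME_TO_VISEME[p] for p in phonemes]
    track = []
    for a, b in zip(poses, poses[1:]):
        for i in range(frames_per_phoneme):
            track.append(lerp_pose(a, b, i / frames_per_phoneme))
    track.append(poses[-1])  # hold the final viseme
    return np.array(track)

# A toy utterance: closed lips -> open mouth -> rounded lips.
track = synthesize_track(["M", "AA", "UW"])
```

Note how the mid-transition frame blends two visemes equally; this is exactly why a per-phoneme lookup table is insufficient once co-articulation is considered, since each output frame depends on neighboring phonemes, not just the current one.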
[2004] learn a linear dynamical system from recorded speech video clips. The system is driven by both a deterministic speech input and an unknown random input. Because of the limitations of the model, only video clips of single words or very short sentences can be synthesized; therefore, co-articulation cannot be fully modeled with this approach.

Expression synthesis: The facial expressions of a character reflect its motivations and emotional states; they accompany verbal communication and convey personal feelings. Facial expressions can be described parametrically using a set of Action Units, which describe facial movements and poses in small regions of the face, such as "pulling the lip corner". Video-based representations describe facial motion using images, while motion-data-based approaches describe it using the trajectories of motion-capture markers. A main problem is how to control facial expressions during speech. To address this issue for video-based representations, researchers have proposed factorization models. For example, they show how face images can be classified according to different personal identities or different facial poses, and how to separate speech content from …
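The Action Unit parameterization above amounts to deforming a neutral face by a weighted combination of localized displacement bases. Here is a minimal sketch under invented assumptions: the toy face has just four 2D vertices flattened into one vector, and the two AU displacement vectors (loosely named after FACS AU 12, the lip corner puller, and AU 26, jaw drop) contain made-up numbers, not measured data.

```python
import numpy as np

# Toy face: 4 vertices in 2D, flattened to 8 numbers. The neutral pose is
# the origin; hypothetical Action Unit bases displace small regions of it.
neutral = np.zeros(8)
AU_BASES = {
    "AU12_lip_corner_puller": np.array([0.5, 0.2, -0.5, 0.2, 0.0, 0.0, 0.0, 0.0]),
    "AU26_jaw_drop":          np.array([0.0, 0.0, 0.0, 0.0, 0.0, -0.8, 0.0, -0.8]),
}

def apply_action_units(neutral, activations):
    """Deform the neutral face by a weighted sum of AU displacements.

    `activations` maps AU names to weights in [0, 1], so expressions can
    be dialed in continuously rather than switched on and off.
    """
    face = neutral.copy()
    for au, weight in activations.items():
        face += weight * AU_BASES[au]
    return face

# A half-open-mouthed smile: full lip-corner pull, half jaw drop.
face = apply_action_units(neutral, {"AU12_lip_corner_puller": 1.0,
                                    "AU26_jaw_drop": 0.5})
```

Because each AU only touches a few components of the face vector, combining them gives localized, composable control, which is what makes the Action Unit description convenient for driving expressions during speech.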
