Konstantin Kharitonov, you seem to understand this area. Why isn’t there a tool for creating a 3D model of a person in motion? Photograph them from multiple angles, ask them to make various facial expressions (possibly while projecting a laser grid onto them), process what was captured, and combine it with some universal knowledge base of human anatomy. The result would be a 3D model of a specific face that could then show any desired expression on a computer, because a) muscles work the same way in all people, b) the model knows the range in which each muscle moves, and c) the model knows how the skin, eyes, mouth, and ears look when certain muscles are tensed or stretched.
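As I understand it, what you describe in (a)–(c) resembles a linear blendshape model: a neutral face plus a set of per-"muscle" displacement fields, each activated within a bounded range. Here is a minimal sketch of that idea; the vertex counts and the `smile`/`brow_raise` data are purely hypothetical toy values, not from any real scan:

```python
import numpy as np

# Toy "face" of 4 vertices in 3D (a real scan has tens of thousands).
neutral = np.zeros((4, 3))  # neutral (relaxed) face geometry

# Each blendshape is a displacement field: where every vertex moves
# when one "muscle" is fully activated. Values here are made up.
smile = np.array([[0.0, 0.5, 0.1]] * 4)       # hypothetical "smile" displacements
brow_raise = np.array([[0.0, 0.2, 0.0]] * 4)  # hypothetical "brow raise" displacements
blendshapes = np.stack([smile, brow_raise])   # shape: (num_expressions, verts, 3)

def pose_face(neutral, blendshapes, weights):
    """Linear blendshape model: neutral + weighted sum of displacement fields.
    Weights are clamped to [0, 1] -- the 'range in which each muscle moves'."""
    w = np.clip(np.asarray(weights, dtype=float), 0.0, 1.0)
    return neutral + np.tensordot(w, blendshapes, axes=1)

# A half-smile with fully raised brows:
face = pose_face(neutral, blendshapes, [0.5, 1.0])
```

Point (a) corresponds to reusing the same set of blendshapes across people; the capture step would fit a specific person's `neutral` geometry and displacements from the photos.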
After all, nothing limits the number of photos we can take, from different angles and under different lighting. Even if there are a million photos, that’s no problem; we’ll wait. The system could then render the face from any needed perspective, under the required lighting, producing realistic expressions of a specific face for whatever an artist requires.
I imagine this as a framework for 3D designers: they could simply specify any facial expression, but also alter the face itself (e.g., widen the eyes or make the nose more prominent). The facial geometry would be imported from the camera “scanner”, along with the skin texture for more accurate rendering.
Is there such software?