
Finishing Touches

Taking a break while a shot from The Beast renders for some animation testing. Figured I’d flesh out the website a bit and add some of the artwork I did for The Essential Blender.

The story behind this image is that I was thinking about teaching and learning, and creating something like yourself. For some reason, I picture these little guys living inside Blender, building all the things you ask for behind the scenes. They finally decided to honor themselves for a change…



More from The Beast

Some different facial expressions from the Beast, with final modeling. He has hands, feet, etc. now. The main rig is done, and I’ll work control bits into it for special things as the need arises. Concept art is done on the Mom. As a test, I’ll be modeling her with the sculpt tools and using retopo to get a good loop structure.

Faces of the Beast

Some different looks from the Beast.

Mom Concept Art

Final concept art for the Mom. The facial lines and shading are overemphasized to assist with modeling. The final look will be softer and younger, especially with SSS.

A New Take on Shape Keys

The Peach Project is thinking about Shape Keys, and since I’ve been giving the subject some thought and doing some preliminary coding myself, I thought I’d share:

The Problem

When doing my own work with facial animation and shape keys, I came across a serious shortcoming that also suggested a solution.

The problem, from an animator’s standpoint, is this: when a real face changes shape from, for example, a smile to a frown, all the parts of the face do not move in concert. Just as different parts of the body precede or follow the main motion of an action like throwing, so too do the portions of the face work together, but at different rates, to form expressions. So, a smile is not a static shape. It is the combination of differently timed changes that lead up to and fall away from a pose at a given moment in time.

To ignore this is to have faces that move and change expression in an all-at-once, unnatural way.

Clearly, you could create a facial expression as a whole shape key, and then, using vertex groups, break it into component parts and animate them separately each and every time you wanted to change expressions. Just as clearly, this is not an ideal solution.

One proposed remedy to this problem is the use of an NLA-like system for stacking, sliding and controlling shape keys. While this would be an improvement, I think that a more intuitive interface is possible.

Expressions

What we need is some way to create and manage entire sets of shape key Ipos. A full Ipo could be created, for example, for a smile, with the different shape keys interacting over time to give a believable build-up to the final expression. This Ipo can then be unlinked from the object, and a new one attached and created for a frown, a look of surprise, or for smaller, more localized activities like an eye twitch or lips pursing.

After these shape-based Ipos are created, they can be linked back to the object in a new interface panel (or window type), based on “Expressions.” An object’s Expression is a link to a list of Expression Channels. Each Expression Channel contains a link to a full shape Ipo as defined above, information on the sample range from within that Ipo (start and end frames), and a control slider (or dial) and linked implementation Ipo to store the slider’s values over time.

This allows you to easily keyframe entire developing expressions, going so far as to adjust their timing by tweaking the implementation curve for that Expression Channel’s control slider, just like the Influence keying on Constraints.
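To make this concrete, here is a minimal sketch in plain Python (an illustration only, not Blender code and not the actual implementation) of what an Expression Channel could carry and how it might be evaluated. All of the names here (ExpressionChannel, shape_ipo, implementation_ipo and so on) are hypothetical, and simple linear interpolation stands in for Blender’s Ipo curves:

    # Hypothetical sketch of the proposed Expression system -- illustration only,
    # not Blender code. A "curve" here is a sorted list of (frame, value) points
    # evaluated with linear interpolation, standing in for an Ipo curve.

    from dataclasses import dataclass, field
    from typing import Dict, List, Tuple

    Curve = List[Tuple[float, float]]  # (frame, value) control points, sorted by frame


    def eval_curve(curve: Curve, frame: float) -> float:
        """Evaluate a curve at a frame, holding constant beyond its ends."""
        if not curve:
            return 0.0
        if frame <= curve[0][0]:
            return curve[0][1]
        if frame >= curve[-1][0]:
            return curve[-1][1]
        for (f0, v0), (f1, v1) in zip(curve, curve[1:]):
            if f0 <= frame <= f1:
                t = (frame - f0) / (f1 - f0)
                return v0 + t * (v1 - v0)
        return curve[-1][1]


    @dataclass
    class ExpressionChannel:
        name: str                    # e.g. "Smile"
        shape_ipo: Dict[str, Curve]  # the authored shape Ipo: one curve per shape key
        sample_start: float          # start of the sampled range within shape_ipo
        sample_end: float            # end of that range
        implementation_ipo: Curve = field(default_factory=list)  # slider value keyed over time

        def evaluate(self, frame: float) -> Dict[str, float]:
            """Read the slider from the implementation curve, map it into the sample
            range, and return the resulting value for every shape key in the channel."""
            slider = eval_curve(self.implementation_ipo, frame)
            sample_frame = self.sample_start + slider * (self.sample_end - self.sample_start)
            return {key: eval_curve(curve, sample_frame)
                    for key, curve in self.shape_ipo.items()}


    @dataclass
    class Expression:
        channels: List[ExpressionChannel] = field(default_factory=list)

        def evaluate(self, frame: float) -> Dict[str, float]:
            """Combine all channels by simply summing their contributions per shape key."""
            totals: Dict[str, float] = {}
            for chan in self.channels:
                for key, value in chan.evaluate(frame).items():
                    totals[key] = totals.get(key, 0.0) + value
            return totals

Keying an expression then just means keying the implementation curve for its slider, exactly as you key Influence on a constraint; how multiple channels should combine (the naive summing above, or something smarter) would be an open design question.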

The main benefit to this system is that once it is set up, you are animating with simple controllers and common keyframes and Ipo curves, with immediate feedback in the 3D view, but instead of blending from simple shape to simple shape, these keyframes are controlling more complex and (hopefully) more nuanced and realistic character performances.

An Expression/Expression Channel implementation is better suited to an animator’s shape keying workflow than an NLA approach, especially for facial animation. With a slider (or dial) based system, animators can easily tweak timing and expression on-screen, while they view and work with poses in the 3D view. A strip-based system presents an unnecessary level of abstraction, requiring the animator to either work with numeric panels or perform transforms (scale/translate) on a strip to fine-tune facial animation.

For the purposes of time syncing with other NLA-based animation, the Ipos generated for Expression Channels would be capable of becoming Actions (Object Actions), and therefore placeable and scalable in the main NLA window.

Conclusion

I’ll be soliciting feedback on this proposal from some animators as well as coders. From a coding standpoint, this is not a difficult project, especially if the programmer is already familiar with the code for the shape key system. I have DNA structs already built for the project, and am reading my way through the shape keying code right now.

Follow Up:

After some discussion with other developers and users, I think that something like this can be implemented with a more general modification to Blender. Although you really have to jump through hoops to make it happen, you can save different sets of shape transformations as Actions. Actions can be arranged and stacked in the NLA Editor. What we would need to effect something like this would be the addition of an Ipo block to NLA Strips that would allow you to key for both Time and Influence. A more general approach like this to the internals would not only benefit shape keying, but give animators and riggers a whole new level of control and automation.
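As a rough illustration of what keying both Time and Influence on a strip could buy you for shapes, here is another small, purely hypothetical sketch in plain Python (Strip, time_ipo, influence_ipo and action_curves are illustrative names, not existing Blender structures). The Time curve retimes the shape motion stored in the strip’s Action, and the Influence curve fades it in and out:

    # Hypothetical sketch only. A curve is modeled here as a plain function from
    # frame to value, standing in for an Ipo curve.

    from dataclasses import dataclass
    from typing import Callable, Dict

    Curve = Callable[[float], float]


    @dataclass
    class Strip:
        action_curves: Dict[str, Curve]  # shape key curves stored in the strip's Action
        time_ipo: Curve                  # scene frame -> action frame (keyable Time)
        influence_ipo: Curve             # scene frame -> 0..1 blend factor (keyable Influence)


    def eval_strip(strip: Strip, frame: float) -> Dict[str, float]:
        """Retime the stored shape motion with the Time curve, then scale it by Influence."""
        action_frame = strip.time_ipo(frame)
        influence = strip.influence_ipo(frame)
        return {key: influence * curve(action_frame)
                for key, curve in strip.action_curves.items()}

A strip holding a twenty-frame smile, for instance, could be keyed on Time to unfold over forty scene frames and keyed on Influence to ease in and out, all without touching the underlying shape keys.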

Of course, the NLA interface is less than ideal for the kind of straight-ahead animation workflow so often used when working with shapes. However, a simple control setup could be created with Python, or Ipo drivers could be used. That would let people play around with interface elements, and if one setup emerged as clearly superior to the rest, it could be codified in the sources.

I have most of the stuff for the house models accumulated, and now that I’ve had a chance to start working on this again, I’ve begun the Beast. Right now, he’s not very beastly, as he’s all symmetrical and lacks both hands and feet. Beastliness will increase with a, um, marked decrease in facial symmetry.

The Beast himself (in progress):

