Digital Makeup

by - 2012/01/02 | Development

The actual start of Mango production is getting closer and closer, and time is flying.
Since we have an awesome script now and even some great concept art, we can really start to think about HOW we are going to make things happen, technically. I mean, camera tracking is all well and good, but how will we, for example, do the makeup? Can we afford some actual makeup artists who will turn our actors into jaw-dropping VFX characters, covered with gore, blood, bolts, cables, jelly or whatever is required?
Or can we do that all digitally?
The basic idea for digital makeup is easy, as long as you do not think about it for too long. Just have a digital double; track the head, body, limbs or whatever; apply the textures, props or clothes to that digital version of your actor; and composite it over the actual footage of your actor. Here’s an example:

This was not too hard to do, but it was a perfect situation:
Enough markers, ok lighting, lots of perspective shift, no fast movements, everything quite in focus, not too far away. And most importantly: No deformation! As soon as the markers move and deform in relation to each other you can forget about doing object tracking.

But think about a real filming situation:
The actors talk, move their face, raise their eyebrows, scream, laugh, jump, run, do sudden and quick movements, have long hair hanging in their face etc. etc.
All this will make a proper object track impossible. The markers in the footage would be all blurry or totally invisible. There would not be enough information about perspective and movement. The grid of markers would deform.

So in this case we cannot rely on object tracking, but would have to do planar tracking, as described in this article.
They used the popular tracking software Mocha from Imagineer Systems with a technique called planar tracking. Basically you can track a whole region of your footage, even if it deforms. You don’t need to use any markers, as long as that region has some more or less distinct features. That allows you to easily apply patches with textures to your footage that smoothly follow your actor’s face and movements. It also makes all sorts of rotoscoping much easier. By the way, let’s not mention that rotoscoping is a whole new area for much-needed development as well… ;)
Unfortunately we don’t have that kind of planar tracking yet, even though our awesome libmv developer Keir Mierle has something in the pipeline called “affine tracking”, which might help in this case. So with some luck we can do digital makeup without object tracking too. And we really need it!
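For the curious: the mathematical core of such an affine tracker is just a least-squares fit. Given marker positions in one frame and the next, you solve for the 2x2 matrix and translation that best map one set onto the other (six unknowns, so three non-collinear correspondences already suffice). A minimal sketch in Python/NumPy, purely to illustrate the idea, not how libmv actually implements it:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares fit of a 2D affine transform mapping src -> dst.

    src, dst: (N, 2) arrays of corresponding point positions.
    Returns a 2x3 matrix [A | t] such that dst ~= src @ A.T + t.
    """
    n = src.shape[0]
    # Each correspondence contributes two rows to the design matrix:
    # [x, y, 0, 0, 1, 0] for dst_x and [0, 0, x, y, 0, 1] for dst_y,
    # with unknowns (a00, a01, a10, a11, tx, ty).
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src
    M[0::2, 4] = 1.0
    M[1::2, 2:4] = src
    M[1::2, 5] = 1.0
    b = dst.reshape(-1)
    params, *_ = np.linalg.lstsq(M, b, rcond=None)
    A = params[0:4].reshape(2, 2)
    t = params[4:6]
    return np.hstack([A, t[:, None]])
```

Tracking a deforming patch then means re-fitting this transform from frame to frame and warping the reference patch accordingly.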

Now we have a question for you guys: How do YOU do digital makeup? How do YOU do marker removal? We are sure you have some nifty techniques as well, so if you have some ideas you want to share with us, we would be very open to that! After all, this is an open movie, so we would be glad to open up discussion and development for these kinds of techniques too, since as far as we know there have not been *that* many attempts to do things like digital makeup in Blender before. Or have there?

To open up the discussion, we have created a wiki page where you can post and discuss techniques and ideas. Please feel free to contribute!


  2. Eibriel says:

    I love VFX, but please whenever you can do it with makeup, then do it even better with Blender :)

    I’ll be thinking about this problem… :P

  3. Daniel Wray says:

    I certainly think it’s good that Blender is being pushed in these areas, and I would say that having at least one or two digitally created make-up effects would be good, but if it can be done cheaper, faster, and with less man hours on-set by a make-up artist then that’d be the best way to go about it for the project overall. But I know all you guys working on this know that :)

    Having two shots as opposed to 20 or 30 or whatever will save many, many hours of tedious work and two shots would probably be enough of a test bed to see what tools work, and how the work flow could be improved.

    As for removing markers, well I would say that if they are small enough then what you did for your video would be good enough, and I believe that was simply to mask and feather out the areas to blend in the skin over the markers.

    If, however you’re using full areas of the body with blue / green materials and markers then a full digital double with a painted patch area would work well, and in fact you could do a full digital double & patch texture with a standard tracking shot, it all depends on the scene really.

    P.s.

    I’m currently working on a visual effects pipeline for a research paper that’s based entirely around Blender, so it would be great to get some feedback / have a discussion about the pipeline in general and see what you guys think. Maybe there will be a few things you can use for pre-production test shot set-ups.

  4. irve says:

    For planar please find the guy who did the slo-mo program for improving time-lapse shots. It uses the inherent motion info embedded in video while encoding. IIRC he open-sourced it. Might save some coding.

  5. David Jordan says:

    Perhaps you could use two sets of dots, each a different color. One set of dots would be placed where facial expressions wouldn’t deform them and be used to track the object. Then the other set of dots would be used to deform the object to match the facial expression. Perhaps multiple object tracking with a separate skull and mandible.

  6. Ronnie Baldwin says:

    Actually you capture the face, then sculpt it… creating a high-poly bump map. The guys at Jim Henson’s Creature Shop are the ones who created this very system… It’s called digital puppetry!! Go to their website and ask questions; they have always helped me in the past with my questions!!

  7. Andreas Mattijat says:

    I think the whole thing depends on the shots and their content. If you want to show a kind of transformation (from actor to creature or whatever), digital makeup would be a cool solution. But for most of the stuff (blood, wounds, sweat, dirt, beauty makeup; and don’t underestimate “normal” makeup for actors and actresses) you should go for a makeup artist on set.

    My two cents on tracking: it’s absolutely essential that you have a kind of 2D stabilizer in the pipeline, point-based or (better) a planar tracker. Otherwise you have to 3D-track (and 3D-render) every shot which needs any kind of CG integration of 2D or 3D elements, retouching and/or removal of objects, etc.

    I guess not every shot in this movie will be a crazy moving handheld with lots of perspective shift. Sometimes you can save a lot of time/money with a locked-off camera and some post 2D movement.

  8. Art Luke says:

    You could always try something like they did for Terminator 3: they put green make-up on Arnold’s face so that they could just key it out and replace it with the robot metal.

  9. J. says:

    As has already been said, get a real make-up artist for pleasing and consistent-looking skin and normal blood/scars/wounds. Use digital make-up for fixing continuity errors and special effects/transitions you can’t do for real.

  10. Milad Thaha says:

    I believe in some cases, it might be simply too difficult to consistently add VFX makeup (or it’ll turn into something that’s for the sake of it). It might be preferable to use real makeup wherever possible, marginally expensive though it may seem. I think it’ll actually save you folks time, and let you focus on things and tasks that are more demanding of the artist’s attention.

  11. Milad Thaha says:

    And yes, what Art Luke said. That seems so much easier to have. Either way, you’re going to end up replacing whatever you key, right?

  12. DOOMsayer says:

    I’m sort of a noob in the whole digital art world, but here is my insight from watching this awesome demonstration:
    For marker removal, we can use the tracked points’ data to cover the tracking points:
    1) The user will have to define, for each track point, an area that surrounds this point and will be used for color averaging.
    2) The tracking algorithm should know to identify the tracking point color, mask that color out, and replace it with the average color of the surrounding area, as defined by the user.
    Of course, the smaller the tracking points are, the better the results will look.

    Regarding the tracking of facial movements (again, maybe it’s already done this way; total noob talking here), I am pretty sure there are some facial features that do not move (e.g. eyes, nose bridge, tip of the nose, cheek bones). We can then define 2 sets of tracking points: one for the “non-moving points”, which will describe the model’s position in 3D space, and a second one for tracking all other points against the permanent ones.
    Once we have all that information, along with some extra “tracking-friendly footage” for each shot, I believe making a rig that will carry our makeup and be able to track our points should be relatively easy.

    If you think these ideas aren’t clear enough, or if they are too bright, feel free to contact me.
    I’ll be happy to help the magnificent Blender community :)
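The colour-averaging idea from the numbered steps in the comment above is simple enough to sketch directly. Below is a hypothetical Python/NumPy version (the function name and radii are made up for illustration; this is not an existing Blender node): for each tracked point, take a ring of pixels around the marker and paint the marker area with the ring’s mean colour.

```python
import numpy as np

def remove_marker(img, cx, cy, r_inner, r_outer):
    """Fill a circular marker at (cx, cy) with the mean colour of the
    surrounding ring of skin pixels.

    img: (H, W, C) float array.
    r_inner: radius covering the marker; r_outer: outer radius of the
    ring used for colour averaging.
    """
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(xx - cx, yy - cy)
    inner = dist <= r_inner                       # pixels covered by the marker
    ring = (dist > r_inner) & (dist <= r_outer)   # surrounding skin
    fill = img[ring].mean(axis=0)                 # average ring colour
    out = img.copy()
    out[inner] = fill
    return out
```

In practice you would feed the tracked 2D positions per frame into `cx, cy`, and feather the edge of the filled disc so the seam doesn’t show against the film grain.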

  13. Andreas Mattijat says:

    Marker removal from a face/skin is one of the hardest tasks, similarly painful as wire removals which cross the face.
    I mostly use tracked paint strokes (cloning brush), which keeps the grain similar. Sometimes you can get away with masking the dots (rotoscoping!) and an offset in frames and/or an animated XY transformation of the same footage underneath.
    You often have to rotoscope extra shadows and highlights.
    Nowadays there are also some cool pixel-fill filters, which can help too. (You define a mask and the filter clones RGB pixel values from the rim to the inside… ahem, sorry for my poor description/English.) Then there is planar tracking, which gives you the possibility to replace whole areas (or if you have mochaPro, there is a really awesome removal tab with lots of options).
    And of course the hardcore option: 3D-tracking the face, a 3D model of the face, texturing the face with a markerless texture, and filling the marker mask with the 3D rendering. (Nuke’s and Digital Fusion’s projection method.)

    • sebastian says:

      Yeah, a cloning brush would be awesome. Currently we can fake that by using a mask and the translate node, but that is tedious and annoying. I hope there will be a “real” clone brush within the compositor.
      The pixel-fill filter would be tremendously helpful. What I did in my tests was to blur the whole face with a blur filter and mix that with the footage only on the spots with the makeup markers. But since the markers are much darker than the skin, this leads to darker blurred areas, which then need to be color-corrected. Still a very handy technique, but the method you describe would be much better of course! Someone has to code this! :)
      I found painting the markers away on a static mask not practical in my tests, because if the head moves, the lighting changes too much, so painted spots become obvious too quickly.
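A minimal sketch of what such a pixel-fill filter could look like, assuming the simplest possible strategy: grow the known colours inward from the rim of the mask, one pixel ring per pass. (Real implementations such as push-pull or PDE-based inpainting are smarter; also, `np.roll` wraps at the image border, so this sketch assumes the hole does not touch the edge.)

```python
import numpy as np

def rim_fill(img, mask):
    """Fill masked pixels by propagating colours inward from the rim.

    img:  (H, W, C) float array.
    mask: (H, W) bool array, True where pixels must be replaced.
    Each pass fills every unknown pixel that has at least one known
    4-neighbour with the mean of those known neighbours.
    """
    out = img.astype(float).copy()
    known = ~mask
    while not known.all():
        nsum = np.zeros_like(out)              # sum of known neighbour colours
        ncnt = np.zeros(known.shape)           # count of known neighbours
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            k = np.roll(known, (dy, dx), axis=(0, 1))
            v = np.roll(out, (dy, dx), axis=(0, 1))
            nsum += v * k[..., None]
            ncnt += k
        fillable = ~known & (ncnt > 0)
        out[fillable] = nsum[fillable] / ncnt[fillable][:, None]
        known = known | fillable
    return out
```

Feeding in the marker mask from a track (like the one in the comment above) would replace each dot with smoothly interpolated skin colour, which should avoid the darkening problem of the blur-and-mix workaround.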

      • Andreas says:

        Before you blur the whole face you can try to erode/dilate the image (if there is this kind of filter in Blender; if not… hui, life will be hard when you guys have to deal with greenscreens :-)). It’s not the best method or the best filter, but that’s what I did before there was all this advanced stuff. I even used scaling the image down and up (or scaling in one direction and back) with different filter methods, to badly resample the image and remove detail… works sometimes with wires/scratches. But, hey, thank god there are lots of better and smoother techniques today :-).
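For reference, erode/dilate are just moving minimum/maximum filters, and a dilate followed by an erode (a morphological "closing") wipes out dark specks smaller than the filter window, which is exactly the effect described in the comment above. A naive sketch, not an implementation of Blender's actual Dilate/Erode node:

```python
import numpy as np

def dilate(img, k=1):
    """Grayscale dilation: moving maximum over a (2k+1)x(2k+1) window."""
    out = img.copy()
    padded = np.pad(img, k, mode='edge')
    h, w = img.shape
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            out = np.maximum(out, padded[dy:dy + h, dx:dx + w])
    return out

def erode(img, k=1):
    """Grayscale erosion: the dual of dilation (moving minimum)."""
    out = img.copy()
    padded = np.pad(img, k, mode='edge')
    h, w = img.shape
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            out = np.minimum(out, padded[dy:dy + h, dx:dx + w])
    return out

def close_dark_spots(img, k=1):
    """Morphological closing: removes dark features smaller than the
    structuring element while leaving larger shapes mostly intact."""
    return erode(dilate(img, k), k)
```

Running `close_dark_spots` on a luminance channel with `k=1` is enough to swallow single-pixel dark markers before any blurring happens.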

  14. stephen says:

    I agree that in most productions, you want to go the easiest route to get the best possible result with the finished product, buuuuut, this exercise is supposed to be pushing Blender to be a better program, testing its limits and all. If 200 VFX shots are needed because the main character has a tattoo on his forehead, there would be pressure to develop the best/quickest/simplest solution for doing this in post if makeup isn’t used for the shoot… If a makeup artist is hired, it’s a tried and tested solution that’ll help the development of Blender in no way at all.

  15. Blendiac says:

    @Stephen – I hear your point, but consider that in your example, using a makeup artist *would* help Blender, because they’d have more time and resources to actually develop Blender for real-world shots people would actually try to create in a studio. Really, for Blender to benefit the most, the film needs to be made in the closest possible way to how a similar-size studio really would (if they were using Blender, Krita, Inkscape et al.), with the software just developed where it falls short of those needs. Personally I think this digital makeup is awesome and very useful in a bunch of unusual shots… *but* developing Blender just to do shots differently to how an actual studio would is kind of defeating the purpose of an open movie imho.

    • stephen says:

      I agree, in most productions it would be stupid to add tattoos during post when doing it with makeup during production is an option. Buuuuut, what you have to consider is that there are situations in the ‘real’ world where this would have to be done (maybe an actor’s Bugs Bunny tattoo on his forehead isn’t sitting well with the test audiences and it has to go)… This is the perfect opportunity to tackle such an issue and make sure the solution is solid enough to withstand a quick turnaround & reliable quality.

      Exciting times for Blender…

  16. fPaglia says:

    Is the team planned to be six artists or so, or am I wrong?
    I think there could be much more interesting stuff to be researched and done than wasting the time of an artist to add a tattoo to the face of the actor if it is not animated :)
    VFX should improve the possibilities of a shot, not replace the job of a make-up artist.
    Most of the time, two hours of make-up can save weeks of CGI fixes.
    As always, careful reading of the script is the best way to balance the resources and choose which way has to be taken.

  17. Hahahaha! Seb, you nailed it! The last effect sure is a killer! :D -Reyn

  18. Michael S. says:

    I guess you can implement something like rotobrushing in After Effects CS5… and then track points to recreate this in 3D. But this is all theoretical; I’m sure it is very difficult to implement this IRL.

  19. Lzymxn says:

    I can’t recall what it’s from, but there is a reel somewhere that has a shot of a man in a cockpit whose face appears to be melting. I’m sure the artist said Blender was used, and it looked amazing. I’m really looking forward to what Mango brings to the community.


  21. PhysicsGuy says:

    Planar tracking seems to be a desirable feature anyway. Consider for instance a scene where one wants to augment or re-texture an existing vehicle that drives by. See for instance the UHaul Walk exercise on http://www.hollywoodcamerawork.us/trackingplates.html

    About the makeup. Real makeup artists might be expensive, but there are acting/art schools in the Netherlands that will probably allow students to participate for study credits.

    Remember Ian’s warning during his (somewhat incoherent) talk at the Blender Conference: be careful with trying to “just do it in post”.

  22. Nik says:

    Does this mean that facial tracking (lipsync) is now possible?

  23. ton says:

    This is just one item from a list of research topics; and a fun one even! The technique behind this is quite general, and lends itself to far more than “make-up” only.

    These kinds of experiments are very useful; it is a lot of fun to explore things from many different angles. It will give us great benefits when people help check the limits and possibilities of current Blender features. We didn’t even start yet, you know!

    Of course we are not going to waste weeks of artist time on something a make-up artist can do in a few hours. And yes, I am already working on getting regular make-up solved. Obviously…

    • fPaglia says:

      I really agree with you!
      While there is time you should test tools and ideas; try, make hypotheses, fail and start again; push everything as far as you can!
      Oh, let me add that the example of Invictus was a typical case of “it has to be fixed because the perceived effect wasn’t strong enough”.
      It should NOT be taken as an example of a standard production, since it can be done only if the whole budget of the film is significantly high and having a bunch of people rotoscoping, tracking and fixing is much less expensive than the investment already made.
      Hope it makes sense for you all :)

  24. Would love to watch this, but I’m running Firefox and don’t have flash installed, and vimeo only supports h.264 which isn’t supported in Firefox and etc for patent risk reasons. If you’re going to use a hosting provider as opposed to providing or embedding with the file itself, any chance you could use YouTube+HTML5+WebM?

  25. Eibriel says:

    I’ve started the discussion on Wiki, if you like to take a look:
    http://wiki.blender.org/index.php/Org_talk:Institute/Open_projects/Mango/Open_discussions/Digital_makeup

    (It’s about complete character replacement.)

  26. Alex Downham says:

    I don’t know if this is of any interest but I found this:

    http://vimeo.com/34504824

    Uses Blender and has digital make up in mind

  27. Ricard says:

    I have no idea about compositing, but have just stumbled upon PCL’s KinectFusion:
    http://www.pointclouds.org/news/kinectfusion-open-source.html

    and immediately thought about this post on Digital makeup.

  28. francoisgfx says:

    rule #0:
    don’t do in post what you can do on set (the inverse will be true once you’re on set, though). Especially when you can have fun putting ketchup on someone’s face.

    And if you want to avoid roto-anim as much as possible, I would say prepare your FX, shooting and angles well beforehand, so you might be able to hide a few things with some camera angles and avoid high detail in close-ups.
    But IMO roto-anim is not avoidable in some cases, and the time animators spent working on CG character animation on a movie like Sintel, they will probably have to spend on roto-anim in some cases here.
    Sometimes you can spend days on a shot trying to solve it with some fancy algorithm and tracking software, where a good animator will do it in the same amount of time or sometimes even faster. At least you can have better control over your schedule.

  29. Cinemafan says:

    I think the answer to “digital or practical makeup” is fairly clear: use both. Use practical effects for anything that can be put on an actor and stays relatively fixed (scars, blood, dirt, tattoos, and traditional “monster” prostheses). Use digital for transformations and any character too non-human to be done practically. For transformations, they’re usually fast enough that you could do a simple track like the one you showed, combined with hand-keying, and get away with it. For characters with human bodies but faces impossible to do practically, do full facial replacement: put a rigid mask on the body actor on set to make head tracking easier, and use more cameras if necessary to get a full track. If you don’t want to completely hand-key the face, then do separate facial performance capture with the actor’s head held rigidly in place. That’s what they did on Benjamin Button.

  30. sebastian says:

    Hey everyone!
    Thanks a lot for your feedback!
    As Ton already said, of course we are not going to do simple makeup the digital way. There are enough other challenges besides doing a digital lipstick. But as many of you already described, it will be more of a mixture of both digital and real makeup. Especially for transformation stuff or prosthetics this will be a fun technique to explore. (Naturally, tests for this will often use not the full-range digital prosthetic but simpler things like, well, ketchup. Time is limited! :) )
    Since yesterday we also have a cheap but effective way to fake facial mocap in Blender, by projecting 2D markers onto a face mesh that we place over a rigid head track. That way we can at least deal with raised eyebrows or other simple facial expressions. Might be enough to really go crazy (but in a controlled way!) with facial-transformation close-up shots.
    http://vimeo.com/34571143
    More testing is needed for that. Hey, wouldn’t that be a fun job for you guys? :)
    (Hint: use the “depth object” in the follow-track constraint in latest SVN to make a 2D track stick to the surface of any mesh.)
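The geometric core of making a 2D track "stick" to a mesh is ray casting: shoot a ray from the camera through the marker's position on the image plane and intersect it with the mesh's triangles. A standalone sketch of the standard Moller-Trumbore ray/triangle test in plain Python/NumPy (the real feature lives in Blender's follow-track constraint; this only illustrates the underlying idea):

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection.

    Returns the 3D hit point, or None if the ray misses the triangle.
    """
    origin = np.asarray(origin, float)
    direction = np.asarray(direction, float)
    v0, v1, v2 = (np.asarray(v, float) for v in (v0, v1, v2))
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1.dot(p)
    if abs(det) < eps:            # ray parallel to the triangle plane
        return None
    inv = 1.0 / det
    s = origin - v0
    u = s.dot(p) * inv            # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = direction.dot(q) * inv    # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = e2.dot(q) * inv           # distance along the ray
    if t < eps:                   # intersection behind the camera
        return None
    return origin + t * direction
```

Casting such a ray against every triangle of the face mesh and keeping the hit with the smallest `t` gives the surface point the 2D marker should be pinned to.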

    • Physics Guy says:

      That is a fantastic feature. You and Sergey make a great team!
      I think I speak for most of us when I say that I would love to contribute, but that I don’t stand a chance of doing this stuff faster than you do. *Grin*

  31. Andrew Downing says:

    Adding to what Ricard said, you can use a Kinect and ROS (which is open source) to do skeletal tracking, like this: youtube.com/watch?v=juDYwSc-jug

  32. walid says:

    Is there a tutorial on how to do this in a step-by-step fashion? Thanks!

  33. Lilly C says:

    This is great! I’m sure we won’t have to worry about motion blur as much because of increasing frame-rate technology. Soon we will all have super-high-speed cameras, and probably just about everything we see on TV will be digital sooner or later :)


