Here’s a little timelapse of the general workflow for our keying shots.
First we track the camera and save that as a blendfile. From that file we generate a new file that is then used as the base for masking and keying. The tracking markers can often be re-used to mask stuff out of the footage.
The cleaned footage is then saved as 4k OpenEXR files with a premultiplied alpha channel.
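For readers unfamiliar with the convention: "premultiplied alpha" means the color channels are stored already multiplied by alpha, so semi-transparent edges from the key composite cleanly with a simple "over" operation. Here's a minimal per-pixel sketch (plain Python, illustrative names, not our actual compositor code):

```python
def premultiply(r, g, b, a):
    """Convert a straight-alpha RGBA pixel to premultiplied alpha."""
    return (r * a, g * a, b * a, a)

def over(fg, bg):
    """Composite a premultiplied foreground pixel over a premultiplied background."""
    fr, fgreen, fb, fa = fg
    br, bgreen, bb, ba = bg
    # With premultiplied alpha, "over" is just: fg + bg * (1 - fg_alpha)
    return (fr + br * (1 - fa),
            fgreen + bgreen * (1 - fa),
            fb + bb * (1 - fa),
            fa + ba * (1 - fa))

# A 50%-transparent white pixel over opaque black:
pixel = premultiply(1.0, 1.0, 1.0, 0.5)   # -> (0.5, 0.5, 0.5, 0.5)
black = (0.0, 0.0, 0.0, 1.0)
print(over(pixel, black))                  # -> (0.5, 0.5, 0.5, 1.0)
```

This is also why keyed edges that were blended against the wrong background show up as fringes later: the premultiplied color already bakes in the alpha.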
A third blendfile is created as the base for layout, swapping the footage for the clean plates and setting up the final shot dimensions (1920×800). That file is then handed over to whoever does the layout. The scene layout for this scene was originally done by Andy.
Usually, after the main composite is done, I fix (outside the main composite) some alpha-blending issues that cannot be solved in the pre-key.
The key that you see here is still a little bit too harsh on one side of the head, but for the demo I didn’t want to spend too long tweaking it.
Here’s the latest in our Creative Commons DVD training series: “Blender Inside Out”. I’ve asked Jonathan Williamson from cgcookie.com – they have experience with other 3D tools – to make a cool collection of videos for experienced 3D artists, explaining Blender to them using metaphors and methods they already know.
The DVDs will get printed around July 10th, and get shipped to you before August. As usual – profits from our DVD bizz go to supporting open movie projects like Mango! And as a bonus this time, it will allow us to do a bigger presentation at SIGGRAPH – spreading free copies of this DVD to the audience as well!
So- this last week has by far been one of the most rewarding! We’ve finally primed the pump, and renders are coming out of the farm in a steady flow. We have about 140 seconds of ‘finished-ish’ film- stuff that’s come out of the farm, and more or less works (though in a lot of cases we may throw it through the farm again, just to fix something small). Meaning- we’re actually on schedule! For now….
For time-based effects and problems, we can’t see if they work or not till we send them through the farm- and sometimes the farm itself introduces glitches- so we’ve been doing lots of re-renders. But the farm’s keeping up!
I think it’s kind of amazing, actually- all of our posts have been showing the same old stuff over and over (even now I’m just kinda reposting some things you’ve seen before), but we have a ton of new finished stuff sitting around. I should upload some of that. Later!
That said: So many cool final shots! So I’m gonna do a super lazy blog post and just put up some framegrabs!
ALSO: Teaser next week?
Robots climb the church tower! Check out the eyes Kjartan painted on the back of one of em. We’re making all of the robots a bit more individualized.
I’m very happy that our friends at xiph.org have agreed to host all of the files we’re producing now and in the coming months. In total we expect to have 2 TB of material;
RAW files from the Sony F65 camera (4k)
Linear OpenEXR files in Rec709 gamut (4k, half float), converted from the linear OpenEXR ACES-gamut files output by the Sony F65 player/converter software (closed sw).
Cleaned OpenEXR files with alpha (4k)
Final renders in OpenEXR – before grading (HD 1920 x 800)
Final graded files, in OpenEXR, PNG, etc (HD 1920 x 800)
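To get a feel for why this adds up to terabytes, here’s a rough back-of-the-envelope estimate of uncompressed OpenEXR frame sizes. The 4096×2160 raster and channel counts are assumptions for illustration (actual files are smaller thanks to EXR’s lossless compression, and the F65 raster differs):

```python
BYTES_PER_HALF = 2  # half float = 16 bits per sample

def frame_bytes(width, height, channels, bytes_per_sample=BYTES_PER_HALF):
    """Uncompressed size of one frame, in bytes."""
    return width * height * channels * bytes_per_sample

# 4k RGB half-float frame (assuming a 4096x2160 raster):
rgb_4k = frame_bytes(4096, 2160, 3)
print(rgb_4k / 2**20)   # ~50.6 MiB per frame

# With an alpha channel (the cleaned plates):
rgba_4k = frame_bytes(4096, 2160, 4)
print(rgba_4k / 2**20)  # 67.5 MiB per frame

# At 24 fps, one second of uncompressed 4k RGB footage:
print(rgb_4k * 24 / 2**30)  # ~1.19 GiB per second
```

At ~50 MiB a frame before compression, a couple of shots easily reach the 60 GB mentioned below.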
Here are already the RAW and OpenEXR files from two shots in the film (60 GB)! Everything will be released (entirely!) after the film has premiered. The files can then also be used to complete the DVDs we’ll make for our sponsors; that way they can fully recreate the pipeline with the originals!
In this brief post I will illustrate an important part of our pipeline: shot creation and development. The process described happens for every shot in the movie and is slightly simplified here (footage input and simulations are not taken into account). Here we have a picture of the process, plus a few notes about how production files are organized (which layers objects should be placed on, and some naming conventions).
Everything starts with a blendfile where the shot is tracked and solved (the track file). This file is then duplicated and used as a starting point for keying and masking (the masking file) and also for the main file, where layout, simulations, effects, lighting and compositing will be done. As soon as the main file is created, all the required libraries are linked in from environment and character files, so that the tracked camera can be placed in the right place in the scene. Once the layout is approved it is possible to proceed with basic lighting and compositing, by creating the proper renderlayer setup. If simulations or effects like gun blasts, explosions or haze are needed, they are added to the same file, but in separate scenes. This system even allows us to use both the Blender Internal render engine and Cycles at the same time!
Given the size of our team it makes sense to keep workflow steps as compact as possible, since we can easily coordinate who is working on which shot. A separate file for animation is created only when needed, and it contains mostly linked libraries or objects, such as the camera, from the main file. At the same time, the main file links in masks and actions, so that they can be used in the compositor and on rig proxies.
This system allows a good degree of freedom and flexibility, and it’s proving itself quite reliable.
Other posts about how we deal with footage, simulations and editing will come in the future :)