During these weeks before filming and the actual work on the shots begins, we are doing several tests to check out the production tools. One of them is obviously Cycles. In this particular test we wanted to see how far we could go with integrating the robot hand into a shot.
Of course it is obvious that the hand is fake (after all, it's a robot hand), so we also put in objects that could in theory actually be standing on the table. We went for a mirror ball (there was an actual mirror ball in the scene, so it was easy to compare) and a plastic toy.
Yesterday we had the chance to test the Sony F65 at Camalot (who were very nice and helpful). It was very exciting to see such an enormous camera in the hands of our DP. But even though the whole process is digital all the way, getting hold of the data is not as easy as you might think (see yesterday’s post).
So, after getting the data out of the camera, safely transferring it to the Blender Institute and plugging it into the computer, the question is: now what? We have been shooting in Sony F65 Raw format (.mxf), which generates enormous files: 10 seconds of footage is 2.68GB of raw data, unreadable unless you install the Sony F65 viewer app (of course only possible after the annoying registration of all your data, and with a not very user-friendly interface. But hey, you can even write emails with it!).
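Those numbers are worth a quick sanity check. A back-of-the-envelope sketch in Python, using only the figures from the post (2.68GB per 10 seconds, 256GB per SSD card); everything derived from them is simple arithmetic, and the exact rate will of course depend on frame rate and recording format:

```python
# Rough data-rate arithmetic for the F65 raw figures quoted above.
GB = 1000 ** 3  # storage vendors count in decimal gigabytes

bytes_per_second = 2.68 * GB / 10   # ~268 MB/s of raw data
card_capacity = 256 * GB

seconds_per_card = card_capacity / bytes_per_second
minutes_per_card = seconds_per_card / 60

print(f"data rate: {bytes_per_second / 1e6:.0f} MB/s")
print(f"one 256 GB card lasts about {minutes_per_card:.1f} minutes")
```

That works out to roughly 268 MB/s and about a quarter of an hour per card, the same ballpark as the roughly 12 minutes per 256GB card mentioned below.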
From this app, for which I wasn't able to find any documentation, you can export to EXR, DPX and MXF (again, the MXF would not be readable with FOSS). The DPX files are for some reason also unreadable, so we go with OpenEXR, as planned. (In case someone is interested, here's a patch for DPX support: http://projects.blender.org/tracker/?group_id=9&atid=127&func=detail&aid=27397. Any developer willing to have a look into it?)
Today we have been testing the F65 camera at Camalot.
It looks quite okay and is probably worthy of delivering the footage for Mango. :)
Joris, our Director of Photography, seemed to have enjoyed it very much!
We shot all kinds of tests with it, more stops, less stops, green stuff, grey stuff, Ian, me and even some action scenes!
But: we can’t see it yet! It is stuck in a black box with green blinking lights from Sony. At the moment I am still sitting at Camalot, waiting for the data transfer to finish. Having 4K footage is nice, but copying it is not. Maybe there are still some bottlenecks, so it might get better.
Basically, as far as I understood it, it works like this: you shoot 12 minutes of footage and you have filled up a 256GB SSD card. You stick this card into a closed black box with blinking lights, and you can access the footage on it over the network with a web browser. From there you dump the Sony F65 raw footage as MXF files onto hard drives, one main, one backup. These hard drives are then hooked up to a Mac or Windows machine with the F65viewer app on it, which can export the OpenEXR sequences and proxy videos onto the server. Then we will finally be in the safe and open haven of linear image sequences.
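The dump-to-two-drives step is the one where a silent copy error could ruin a day of shooting, so it pays to verify checksums. A minimal sketch of that part of the workflow (the paths and the `ingest` helper are hypothetical, not part of the actual Sony tooling):

```python
# Copy each .mxf clip from the card to a main and a backup drive,
# verifying a SHA-256 checksum after every copy.
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def ingest(card_dir: Path, main: Path, backup: Path) -> None:
    for clip in sorted(card_dir.glob("*.mxf")):
        source_hash = sha256(clip)
        for dest_dir in (main, backup):
            dest = dest_dir / clip.name
            shutil.copy2(clip, dest)
            if sha256(dest) != source_hash:
                raise IOError(f"checksum mismatch for {dest}")
```

Hashing every file twice is slow, but with footage this expensive to reshoot, a verified copy is worth the wait.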
I hope that soon I can grab the drive with the footage and bike home to the Institute to take on the next challenge: the file conversion. After that we can finally test the whole workflow and see what we can improve. So, no footage yet today!
More to come in the next days!
After finishing the Track, Match, Blend DVD this week I could finally start doing something more useful for the production.
In order to help with the reconstruction of the set at the Oude Kerk, I have started tracking various shots that I took there, getting a more or less good geometry reconstruction out of them. There is always a slight error margin in these tracks, so the resulting geometry is never 100% perfect, but it is still very useful for getting a sense of scale and proportion from actual footage. The nice thing about Blender’s tracking system is the flexibility to link in several shots at once and align the cameras manually, getting multiple views aligned and thereby a pretty big point-cloud reconstruction of the scene.
For all of you who have already been waiting too long for the Track Match Blend DVD, here is a little preview chapter about using manual undistortion for a better camera solve and using reference images from Google Maps to check whether the solution makes sense.
I am working hard to finish the DVD within the next days! Sorry for the delay!
Today we did a little tour of the Oude Kerk here in Amsterdam and enjoyed the first sun of spring. Blogging in the sun while hanging around in a beanbag at the Blender Institute is also quite pleasant, I can tell you!
Anyway, the light in the church is beautiful! We have to find a way to get it into the movie while at the same time having the church destroyed by robots. The trees and bicycles in front of the church and around the red-light district might become a masking nightmare too. Or we just cover them with futuristic tech stuff. But let’s see what happens.
I couldn’t help but film and track something in the church. But I have other things to do than to put something into it, so if you are interested in some compositing fun, here are the track and footage. Do something cool and link it here! :)
As you all know, Mango is not only meant to create an awesome short film, but also to focus and improve Blender development. We already have great new tools, but for a real open source VFX pipeline we need a lot more!
Here are the main categories of development targets we would like to work on over the next 6 months (in random order):
Camera and motion tracking
Photo-realistic rendering – Cycles
Green screen keying
Color pipeline and Grading tools
Fire/smoke/volumetrics & explosions
Fix the Blender deps-graph
Asset management / Library linking
How far we can bring everything is, as usual, quite unknown. Typically the deadline stress will take over at some moment, forcing developers to work only on what’s essential, not on what’s nice to have or had been planned. Getting more sponsors and donations will definitely help though! :)
Below are per-category notes that I gathered during the VFX roundtable at the Blender Conference 2011 and in discussions with artists like Francois Tarlier, Troy Sobotka, Francesco Paglia, Bartek Skorupa and many others, plus some of my own favorites.
I have to warn you. This post is looong. :)
Camera and Motion Tracking
Even though the tracker is already totally usable, including object tracking and auto-refinement, there is still room for improvement.
One major feature that we are waiting for is planar tracking. Some of you might know Mocha, a planar tracker widely used in the industry, with which you can do fast and easy masking, digital makeup, patching etc. In a lot of situations you don’t really need a full-fledged 3D track just to manipulate certain areas of your footage. All you need is a tracker that takes into account the rotation, translation and scale of the tracked feature in 2D space, for example to generate a mask that automatically follows the movements and transformations of the side of a car as it drives by.
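To make the rotation/translation/scale idea concrete, here is a toy sketch of the 2D similarity transform such a tracker solves for, assuming the positions of two tracked features are already known in two frames (a real planar tracker like Mocha estimates the transform, or a full homography, directly from image patches). Treating 2D points as complex numbers, a similarity transform is z' = a*z + b, where |a| is the scale and arg(a) the rotation:

```python
# Fit a 2D similarity transform (scale + rotation + translation)
# from two point correspondences, using complex arithmetic.
import cmath

def fit_similarity(p0, p1, q0, q1):
    """Solve z' = a*z + b from two correspondences p0->q0, p1->q1."""
    a = (q1 - q0) / (p1 - p0)
    b = q0 - a * p0
    return a, b

# Two markers tracked from frame 1 to frame 2 (made-up coordinates,
# a pure translation of (+10, +20) in this example):
a, b = fit_similarity(100 + 100j, 200 + 100j,
                      110 + 120j, 210 + 120j)

def apply(z):
    return a * z + b

# Any mask vertex can now follow the tracked plane automatically:
mask_point = apply(150 + 80j)
print(mask_point)  # the mask vertex moved by the same (+10, +20)
```

Two tracked points fully determine this transform; with four points you could solve a full homography and handle perspective shift of the plane as well.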
Keir Mierle has something in the works that would allow such workflows. Obviously that would be tremendously helpful for masking and rotoscoping as well.
The actual start of Mango-production is getting closer and closer and time is flying.
Since we now have an awesome script and even some great concept art, we can really start to think about HOW we are going to make things happen, technically. I mean, camera tracking is all well and good, but how will we, for example, do the makeup? Can we afford actual makeup artists, who will turn our actors into jaw-dropping VFX characters, covered with gore, blood, bolts, cables, jelly or whatever is required?
Or can we do that all digitally?
The basic idea of digital makeup is easy, as long as you do not think about it for too long: have a digital double, track the head, body, limbs or whatever, apply the textures, props or clothes to that digital version of your actor, and composite it over the actual footage of your actor. Here’s an example:
This was not too hard to do, but it was a perfect situation:
Enough markers, OK lighting, lots of perspective shift, no fast movements, everything pretty much in focus, not too far away. And most importantly: no deformation! As soon as the markers move and deform relative to each other, you can forget about object tracking.
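The final "composite it over the actual footage" step, by the way, boils down to the standard alpha-over formula. A minimal sketch on single RGBA values (premultiplied alpha, channels in 0..1); a real compositor just runs this for every pixel:

```python
# Premultiplied-alpha 'over' operator: result = fg + (1 - fg_alpha) * bg.
def alpha_over(fg, bg):
    fr, fg_g, fb, fa = fg
    br, bg_g, bb, ba = bg
    k = 1.0 - fa
    return (fr + k * br, fg_g + k * bg_g, fb + k * bb, fa + k * ba)

# A 50%-transparent red "makeup" sample over an opaque grey footage sample:
makeup = (0.5, 0.0, 0.0, 0.5)   # premultiplied: 0.5 alpha * pure red
footage = (0.4, 0.4, 0.4, 1.0)
print(alpha_over(makeup, footage))  # approximately (0.7, 0.2, 0.2, 1.0)
```

The premultiplied convention is the same one Blender's renderer and OpenEXR files use, which is why render layers drop straight into the compositor without fringing at the edges.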