This is the same as the OpenGL clip i posted last week, just rendered.
Again, this was just a test, in no way is this movement the defined style of movement for the Quadbot.
The dome environment is currently at the end of the design/previz stage and at the beginning of the detailing/actual modelling.
Just a few minor props left to block out, but everything still needs many, many polys to look anything like a final…
Here are some shots of the dome and church sets (still animatic level; materials and lights are totally temp!):
Rendered in Cycles on the Tesla, 3-4 minutes per frame is still too much for an animatic, so we'll probably use the OpenGL render for it, unless we want to start designing/previzzing the lighting already.
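For anyone who hasn't used it: the OpenGL render just dumps the viewport drawing for every frame, which takes seconds instead of minutes. A minimal sketch of kicking it off from Blender's Python console; the output path and format are placeholders, not our actual setup:

```python
# Minimal sketch: render the animatic with the OpenGL (viewport) renderer
# instead of Cycles. Output path and format are placeholders.
import bpy

scene = bpy.context.scene
scene.render.filepath = "//animatic/frame_"        # hypothetical output path
scene.render.image_settings.file_format = 'PNG'

# Renders the scene's full frame range through the viewport drawing code,
# taking seconds per frame rather than minutes.
bpy.ops.render.opengl(animation=True)
```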
Then some detailing for the bridge and roads: I'm trying different combinations of shrinkwrap, solidify and particle scattering to organize these for close-up (bridge) and mid-ground (rest of the dome) views.
After a few tests it seems promising: the risk is relying too much on 'heavy' meshes for these tiny greeble details, but the approach seems flexible enough for different situations.
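For the curious, here's a rough sketch of the shrinkwrap + solidify half of that combo in Blender Python; the object names and values are placeholders, not our actual scene setup:

```python
# Sketch: conform a greeble/detail mesh to a base surface and give it
# thickness for close-ups. Object names and values are hypothetical.
import bpy

base = bpy.data.objects["Bridge"]       # hypothetical base surface
detail = bpy.data.objects["Greeble"]    # hypothetical detail mesh

# Drape the detail mesh over the base surface
shrink = detail.modifiers.new(name="Conform", type='SHRINKWRAP')
shrink.target = base
shrink.offset = 0.02                    # keep it slightly above the surface

# Give the wrapped shell some thickness so it holds up in close-up shots
solid = detail.modifiers.new(name="Thickness", type='SOLIDIFY')
solid.thickness = 0.05
```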
As promised, here is a shortish video showing off a bit of the detail in the rig as it stands so far. As you'll see, there are still holes in the model and intersections that need to be fixed; that's just how it is at the moment.
Here's a selection of images I copied from the weekly presentation. As usual a lot more went on, but presenting it all online shouldn't take us away from working. We do our best! :) Right now everyone's enjoying an extra-long Easter weekend! Back in 2 days, have fun too! :)
The detail in the Quadbot is starting to get a little crazy. With each step up in the model's detail, the rigging has to match. With the rig getting up towards almost 800 bones, it's starting to show, and the model isn't even finished yet. I also set myself a challenge of trying to keep away from major mesh deformation (apart from the pipes).
There isn’t a single Armature modifier on this yet.
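That most likely means the hard-surface parts follow bones by rigid parenting rather than by skinned deformation. A minimal sketch of that kind of setup, with all names hypothetical:

```python
# Sketch: rigid bone parenting, so a hard-surface part follows a bone
# without any mesh deformation. Object and bone names are hypothetical.
import bpy

arm = bpy.data.objects["QuadbotRig"]      # hypothetical armature object
panel = bpy.data.objects["HullPanel"]     # hypothetical hard-surface part

panel.parent = arm
panel.parent_type = 'BONE'
panel.parent_bone = "leg_upper.L"         # hypothetical bone name

# Quick check of the bone count mentioned above
print("bones in rig:", len(arm.data.bones))
```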
Yesterday we had the chance to test the Sony F65 at Camalot (who were very nice and helpful). It was very exciting to see such an enormous camera in the hands of our DP. But even though the whole process is digital all the way, getting hold of the data is not as easy as you might think (see yesterday’s post).
So, after getting the data out of the camera, safely transferring it to the Blender Institute and plugging it into the computer, the question is: now what? We have been shooting in Sony F65 Raw format (.mxf), which generates enormous files. 10 seconds of footage is 2.68GB of raw data, unreadable unless you install the Sony F65 viewer app (of course, only possible after the annoying registration of all your data!) (plus a not very user-friendly interface, but hey, you can even write emails with it!).
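For a sense of scale, a quick back-of-the-envelope calculation using only the figure above:

```python
# Back-of-the-envelope storage math from the figure in the post:
# 10 seconds of F65 Raw = 2.68 GB.
gb_per_second = 2.68 / 10            # ~0.27 GB/s
gb_per_minute = gb_per_second * 60   # ~16 GB per minute of footage
gb_per_hour = gb_per_minute * 60     # ~965 GB, about a terabyte per hour

print("per minute: %.1f GB" % gb_per_minute)
print("per hour:   %.0f GB" % gb_per_hour)
```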
From this app, for which I wasn't able to find any documentation, you can export to EXR, DPX and MXF (again, the latter would not be readable for FOSS). The DPX files are for some reason also unreadable, so we go with OpenEXR, as planned. (Just in case someone is interested, here's a patch for DPX: http://projects.blender.org/tracker/?group_id=9&atid=127&func=detail&aid=27397. Any developer willing to have a look into it?)
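Given how flaky the exports have been, a quick sanity check that the EXR frames at least load in Blender seems worthwhile. A sketch, with a placeholder directory path:

```python
# Sketch: verify that every exported EXR in a folder actually loads in
# Blender. The directory path is a placeholder.
import bpy
import os

exr_dir = "/shots/f65_test/exr"   # hypothetical export directory

for name in sorted(os.listdir(exr_dir)):
    if not name.lower().endswith(".exr"):
        continue
    img = bpy.data.images.load(os.path.join(exr_dir, name))
    print(name, img.size[0], "x", img.size[1])
    bpy.data.images.remove(img)   # free it so we don't keep 4K frames in RAM
```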
Today we have been testing the F65 camera at Cam-a-Lot.
It looks quite okay and is probably worthy of delivering the footage for Mango. :)
Joris, our Director of Photography, seemed to have enjoyed it very much!
We shot all kinds of tests with it, more stops, less stops, green stuff, grey stuff, Ian, me and even some action scenes!
But: we can't see it yet! It is stuck in a black box with green blinking lights from Sony. At the moment I am still sitting at Camalot, waiting for the data transfer to finish. Having 4K footage is nice, but copying it is not very nice. Maybe there are still some bottlenecks, so it might get better.
Basically, as far as I understood it, it works like this: you shoot 12 minutes of footage, and you have filled up a 256GB SSD card. This card you stick into a closed black box with blinking lights. You can access the footage on that card over the network with a web browser. From there you can dump the Sony F65 raw footage as MXF files onto hard drives, one main, one backup. These hard drives will be hooked up to a Mac or a Windows machine with the F65viewer app on it. That can then be used to export the OpenEXR sequences and proxy videos onto the server. Then we will finally be in the safe and open haven of linear image sequences.
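With a main and a backup drive in the chain, one obvious safeguard is to checksum both copies after the dump. A minimal sketch; the mount points are placeholders:

```python
# Sketch: verify that the main and backup dumps are byte-identical by
# comparing SHA-256 checksums. Drive paths are placeholders.
import hashlib
import os

def checksum(path, chunk=1024 * 1024):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

main, backup = "/media/main", "/media/backup"   # hypothetical mount points

for root, _, files in os.walk(main):
    for name in files:
        src = os.path.join(root, name)
        dst = os.path.join(backup, os.path.relpath(src, main))
        ok = checksum(src) == checksum(dst)
        print("OK " if ok else "FAIL", os.path.relpath(src, main))
```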
I hope that soon I can grab the drive with the footage and bike home to the Institute to tackle the next challenge: the file conversion. After that we can finally test the whole workflow and see what we can improve. So, no footage yet today!
More to come in the next days!
Tomorrow we’re going to do tests with cameras from Camalot in Amsterdam. I’m very happy that we can also test (and most likely use) the new Sony F65 camera, which made everyone in the industry drool! More news & original frame samples will be posted here tomorrow.
-Ton-
Rob Tuytel invited the famous Dutch film maker Dick Maas to check on our work. He immediately accepted! This morning he showed up and spent an hour with us. He saw the full animatic and storyboards (“You should add more wide shots” – duly noted) and had a short demo from the artists (“Is all of that Blender? Amazing!”). I then spent some time with him on production stuff in general, on how to organize and finance features efficiently here. I noticed he'd be very interested in directing a 3D animation film one day!
He left with a huge pile of Blender training DVDs, we’ll be hearing more of him I bet :)