Yesterday I sat down with Brecht and Sergey to go over the main development topics, checking if we’re still on track and still have the big picture in mind. Because of the current workshop week we didn’t go over issues extensively with the artists; for that we’ll have plenty of time later. Here’s a short summary of what we discussed.
- Motion tracker: already in good shape, and a new solver is underway for testing. No bottlenecks.
- Cycles render: will be seriously used. Brecht is unsure how fast it’ll be in our production setup. We will do GPU and CPU (farm) comparison tests. Missing features are known topics (like shadow & ID passes). He’ll also check volume rendering. Antialiasing and sampling (FSA) is an issue. We’ll do a more detailed Cycles review with the team here in 1-2 weeks.
- We will need light probes or environment mapping (and stitching). Worth investigating are efficient methods to extract light conditions from footage. Sergey would love to dive into this.
- 3D viewport: Brecht will check on overlay methods to enhance selection/active info, especially in rendered display.
- Compositor project: some nodes required by tracking still need porting to OpenCL. This might become a bottleneck.
- Green screen keying: we will investigate best practices and state-of-the-art articles on this. My suggestion is to connect keying (mask extraction) to the clip editor, using markers, tracking info, temporal filter options, etc. Jeroen Bakker and Pete Larabell are interested in helping too.
- Depsgraph: we’ll try to focus on solving the crucial failures, like the ‘dependency cycle conflict’ for piston cases and essential driver updates. As a bonus – when there’s time – we can try multi-threaded anim updating. The “proxy armature” will also need attention.
- Getting Alembic to work would rock too… it would allow animators and shaders/lighters to combine a lot of real-time characters in a shot.
- Color pipeline: the confused code for alpha and color spaces will have to become stable and useful (also on the UI side, to clearly communicate things). OpenColorIO still needs to be investigated by the team.
- Asset management: continue the work with Andrea Weikert (or a GSoC student?), or help out ourselves.
We’ll keep you posted; next week we can do an artists’ version of the above :)
Keying linked to motion tracking would be fantastic… so would Alembic support.
About Cycles, I’m really looking forward to motion blur.
Right, motion blur is crucial!
I wonder, is the film going to be edited in the VSE as it is?
I would love to know that too! I’m always watching for VSE improvements.
Me too, I would love to see compositor nodes available for VSE scripts!
Eek! Multithreaded anim would be really welcome. Gotta love the 9fps I get on Gilga!
Anyway, the moral is… nothing pushes you forward more than a real production!
Thank you for all your hard work. I just hope the files you give us after the project aren’t like the ones from Sintel, where the blend files are no longer usable with later versions of Blender.
Green screen keying (or whatever color, for that matter) is different from rotoscoping. So I’m not sure I see the point of moving the keying tools to the clip editor rather than keeping them in the compositor.
There is so much you can do to footage before keying it (pre-filter stuff), where all the other tools come in handy.
And for rotoscoping in general, to do something similar to this: https://vimeo.com/25823072, I would keep it in the compositor as well. There are so many uses at different steps of your comp.
Moving all this to the clip editor would reduce keying and roto to a “roto-prep” step, which is good in some ways, but it would narrow the pipeline and workflow rather than moving everything into the compositor directly.
Or else you’ll need to route data in and out of the compositor to permit this kind of thing. Right now data only flows from the clip editor into the compositor; you can’t output a comp back into the clip editor to do some work on it. And to be honest, that would make the workflow more complex.
I honestly believe that roto & keying are a full part of the compositing workflow and should stay there.
If you like to make custom node setups with many nodes for a key, you can still go ahead. But there will always be a limit, and that’s not because the nodes are badly coded. To achieve really superior key mattes I’d like to explore completely different methods, which require markers, tracking, reconstruction, temporal filters, etc. These kinds of methods are shot based (working on a sequence of images), not frame based (like compositing nodes).
But this is research; we will have to spend time on it. If there are much better frame-based keying solutions (to use in a composite node) I’d be happy to do that instead.
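To illustrate what “shot based” could mean in practice, here’s a minimal sketch (plain numpy, with made-up function names and thresholds – not existing or planned Blender code): the whole sequence is used to estimate the backing screen temporally, and only then is a matte pulled per frame.

```python
# Hedged sketch of a "shot based" matte: the whole sequence contributes to a
# temporal estimate of the backing screen, then each frame is keyed against it.
# Pure numpy illustration; names and thresholds are hypothetical.
import numpy as np

def estimate_backing(frames):
    """frames: (num_frames, height, width, 3) float RGB of the whole shot.
    A per-pixel temporal median gives a clean(er) plate of the green screen,
    since the foreground rarely covers the same pixel for most of the shot."""
    return np.median(frames, axis=0)

def difference_matte(frame, backing, threshold=0.15, softness=0.1):
    """Per-frame step: distance from the temporally estimated backing color.
    Small distance -> background (alpha 0), large distance -> foreground (alpha 1)."""
    dist = np.linalg.norm(frame - backing, axis=-1)
    return np.clip((dist - threshold) / softness, 0.0, 1.0)

# Usage:
#   backing = estimate_backing(frames)
#   mattes = [difference_matte(f, backing) for f in frames]
```

A frame-based node only ever sees one image; the point of the shot-based approach is that the temporal estimate (and, further on, tracking and reconstruction) feeds the per-frame step.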
Sure, but even those approaches usually lead to results that are best mixed with other techniques. I’m just afraid that moving it to the clip editor would treat it as a solution that works all by itself, which rarely happens.
For instance, lately I have been doing loads of keying in a famous commercial compositing software. Most of the time you can do it with something that seems as magical as one of their famous plugins (one click and you have a clean matte). But sometimes it’s just not enough, and you have to go old-school on the shot, or better yet combine the old-school approach with that famous 3rd-party plugin (I’m getting good at not giving names anymore, huh?! ;) ).
This week, for instance, I did a lot of modifications (pre-filtering) to the footage (levels, cartoon (blur), contrast, saturation) before applying the keying plugin, which in the end made the plugin work even better and gave some pretty nice results.
The workflow with the clip editor has pretty much the same issue as the compositor and the 3D view (this wall I have been talking about before). In a nutshell, everything works fine on its own, but when you want to do tricky stuff or mix things together, it gets tricky!
Not sure I’m explaining my point very well here :/
I guess I have never been a big fan of “clip editor” anyway :p
Will there be a possibility to route something from the VSE back to the compositor? The current one-way workflow RenderLayer > Compositor > VSE is limiting for some tasks. It would be a very nice feature!
(I’m dreaming of a Render Layer node in the compositor that has two checkboxes for compositing and sequencer, just like the ones in the post-processing tab of the render options ;-)
Regarding the depsgraph: is this plan still valid?
http://wiki.blender.org/index.php/User:Ton/Depsgraph_2012
Or is the plan ‘just’ to patch the current one?
That wiki doc is a proposal, but it didn’t lead to actions :) It’s a very tough topic for Blender to tackle.
I am hoping that RAM caching for the compositor gets priority as well. Sure, OpenCL will speed everything up, but it’s not as if it will play back your comp in realtime.
Not sure how serious work can be done without a decent caching system?
Yep. Compositor should be able to work on a shot (or at least render and playback).
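For what it’s worth, the kind of cache being asked for could start as something as simple as a least-recently-used store of evaluated frames. This is a hypothetical sketch only: the `render_frame` callback and the frame budget are assumptions of mine, not the actual compositor design.

```python
# Minimal LRU frame cache sketch for comp playback; not Blender code.
from collections import OrderedDict

class FrameCache:
    def __init__(self, render_frame, max_frames=200):
        self.render_frame = render_frame   # expensive comp evaluation (assumed callback)
        self.max_frames = max_frames       # RAM budget expressed as a frame count
        self.cache = OrderedDict()         # frame number -> image buffer

    def get(self, frame):
        if frame in self.cache:
            self.cache.move_to_end(frame)  # mark as recently used
            return self.cache[frame]
        image = self.render_frame(frame)   # cache miss: evaluate the comp
        self.cache[frame] = image
        if len(self.cache) > self.max_frames:
            self.cache.popitem(last=False) # evict the least recently used frame
        return image
```

With something like this, scrubbing back and forth over a shot only re-evaluates frames that fell out of the RAM budget.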
I think light probing with a mirror ball would be a quick and easy way to get environment lighting on location. Probably not useful for rendering perfectly reflective surfaces, though.
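As a rough sketch of that mirror-ball idea (plain numpy, with my own names and conventions, not Blender code): an orthographic photo of the ball can be unwrapped into a lat-long environment map by finding, for each environment direction, the sphere normal that reflects the camera ray into that direction.

```python
# Hedged sketch: unwrap a photographed mirror ball into a lat-long environment map.
# Assumes an orthographic photo, cropped so the sphere exactly fills the square image.
import numpy as np

def mirrorball_to_latlong(ball, width=1024, height=512):
    """ball: (size, size, 3) float RGB crop of the mirrored sphere.
    Returns an equirectangular (height, width, 3) environment map."""
    size = ball.shape[0]
    # World direction for every output pixel (longitude/latitude grid).
    lon = (np.arange(width) + 0.5) / width * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (np.arange(height) + 0.5) / height * np.pi
    lon, lat = np.meshgrid(lon, lat)
    rx = np.cos(lat) * np.sin(lon)
    ry = np.sin(lat)
    rz = np.cos(lat) * np.cos(lon)
    # The sphere normal that reflects the camera ray (0, 0, -1) into (rx, ry, rz)
    # is the normalized halfway vector; its x/y components give the position on
    # the orthographically photographed ball.
    norm = np.sqrt(rx * rx + ry * ry + (rz + 1.0) ** 2) + 1e-8
    nx, ny = rx / norm, ry / norm
    px = np.clip(((nx * 0.5 + 0.5) * (size - 1)).astype(int), 0, size - 1)
    py = np.clip(((0.5 - ny * 0.5) * (size - 1)).astype(int), 0, size - 1)
    return ball[py, px]
```

The region directly behind the ball is badly sampled (it collapses to the rim), which is exactly why it works for lighting but not for clean reflections.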
Neill Blomkamp says it best in the behind-the-scenes of District 9: getting good light information is key to getting good VFX. :D
I don’t know if this will help, but there is a great book I always check out of my public library called “Special Effects: History and Technique” by Richard Rickitt; it has a whole chapter dedicated to the basics of traveling mattes and keying =]
Forget the cookies, I love the photos!!!