Brecht is checking on OpenColorIO; he did a quick export, which results in the JPGs below. Interesting, but not sure if this is 100% correct, it feels too contrasty. We keep digging :) This is what Brecht did:
- setenv OCIO /path/to/ocio/aces/config_1_0_3.ocio
- oiiotool --colorconvert aces rrt_srgb in.exr -o out.jpg
FIRST ;p pardon my ignorance, but what is the ACES gamut?
Nice read on ACES & color in general:
http://www.fxguide.com/featured/the-art-of-digital-color/
Thanks a lot for that info, Ton :)
After half an hour of reading, that was a lot :P and a pretty raw way of explaining how ACES works lol. My poor brain :P
Anyway, thanks a lot for this link, I kind of understand the important part of it and how it works etc. And to think that I already did 3 modules on the history of cinematography and never heard anyone mention OpenEXR or ACES or sRGB; it's why I keep feeling like I wasted my money on this uni all the time -__-
Tq mango team :)
:D Excellent work, Ton. When is the trailer coming out?
I'll answer, since I'm the only one who speaks Spanish :P
There's no estimated date for a trailer yet, nor any plan to make one soon (I wouldn't count on a trailer for at least a month).
Awesome!:)
But I think the shadows are very dark.
I agree. But the question is, is it due to wrong color spaces, or something for artists to tweak? I don’t have an answer for that yet.
My impression is that if you want a nice wide range of colors, there isn’t as much space left for dark areas, so it’s kind of a tradeoff. Looking at movie trailers, such dark areas seem not uncommon.
We've already found out that ACES and S-Gamut/Linear are not the same, and these EXRs were exported as S-Gamut/Linear, which gives too much red saturation with the transform. So we'll need to switch to ACES export:
ACES: http://download.blender.org/ftp/incoming/Mango/ocio-aces-to-rrt_srgb/A004C003_120507_aces_00000.jpg
S-Gamut/Linear: http://download.blender.org/ftp/incoming/Mango/ocio-aces-to-rrt_srgb/A004C003_120507_lin_00000.jpg
Also a test with our quadbot:
http://download.blender.org/ftp/incoming/Mango/ocio-aces-to-rrt_srgb/quadbot.jpg
I’m a bit confused. When I read stuff about ACES on the web, I got the idea that ACES is supposed to be the color space one works in before exporting to something that can be shown on a screen or printed to film or whatever. ACES would be the thing inside the OpenEXR container. When you export to jpg you have to map the ACES colorspace onto the integer RGB values of jpg. Or am I missing something?
ACES is indeed a huge color space to work in, which you view and convert through a reference rendering transform (RRT) and a kind of display LUT (ODT).
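To make that concrete, here is a minimal sketch of that viewing pipeline using the OpenColorIO 1.x Python bindings, assuming the same config and the colorspace names from Brecht's oiiotool command above:

    import PyOpenColorIO as OCIO

    # pick up the ACES config pointed to by the $OCIO environment variable set above
    config = OCIO.Config.CreateFromEnv()

    # processor from the scene-linear ACES working space to the RRT + sRGB ODT view,
    # i.e. the same "aces" -> "rrt_srgb" conversion the oiiotool command performs
    processor = config.getProcessor("aces", "rrt_srgb")

    # apply it to one scene-linear ACES pixel (18% grey); applyRGB takes a flat float list
    print(processor.applyRGB([0.18, 0.18, 0.18]))

oiiotool just applies that same conversion to every pixel of the EXR and writes the result to the JPG.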
I guess what’s needed next is to apply the sensitometric characteristics of real film stock to get that film-like look. :D
I think it's just that the dynamic range of the camera is larger than can be shown on a regular monitor. Kind of like HDRIs, which need to be tone-mapped to see all the details on a regular screen, right?
Do you have a monitor that supports HDR content?
Yeah HDR footage needs (different) tone-mapping for film and regular displays I guess.
Is the ACES gamut applied in a destructive way? Will you apply that gamut on the preview and still work with the original files, or convert everything to the ACES gamut before color grading and compositing?
ACES is supposed to be a very wide color space, in which the input and output color spaces can all fit without data loss. Probably we will convert everything to ACES EXR files and do compositing and rendering in that space.
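To illustrate the "everything fits" point: going from, say, linear Rec.709/sRGB into ACES2065-1 is just a 3x3 matrix multiply on the float data, and the matrix is invertible, so nothing is thrown away. A rough numpy sketch; the matrix values are the commonly published linear sRGB to ACES AP0 ones, quoted from memory, so treat them as approximate:

    import numpy as np

    # approximate linear sRGB/Rec.709 (D65) -> ACES2065-1 (AP0, D60) matrix, Bradford adapted
    SRGB_TO_ACES = np.array([
        [0.4397010, 0.3829780, 0.1773350],
        [0.0897923, 0.8134230, 0.0967616],
        [0.0175440, 0.1115440, 0.8707040],
    ])

    rec709_linear = np.array([0.9, 0.2, 0.05])   # some saturated scene-linear pixel
    aces = SRGB_TO_ACES @ rec709_linear          # into the ACES working space
    back = np.linalg.inv(SRGB_TO_ACES) @ aces    # and back out again

    print(aces)
    print(np.allclose(rec709_linear, back))      # True: the round trip loses nothing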
It's a bit misleading to talk about the "ACES gamut"; in practice it can be considered unbounded and should include the entire visible spectrum. With sRGB, Rec. 709 or S-Gamut you clearly have a subset of that spectrum, but that's not really the case with ACES.
I agree with you, ACES is a wider, only theoretical, gamut. But you still have losses, don't you? You're "downscaling" color data while converting your files into this gamut. (Yes, I have some weird concerns… 8D)
I don’t know which kind of losses you are referring to then. Since this is stored as float/half-float, anything that is scaled down can be scaled back up again?
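A quick illustration of that (just numpy, not our pipeline): linear float values survive an exposure scale round trip intact, whereas 8-bit integer data would clip and quantize. Half float adds a little quantization, but nothing like integer formats.

    import numpy as np

    # scene-linear pixel values; anything above 1.0 is fine in floating point
    rgb = np.array([0.004, 0.5, 6.3], dtype=np.float32)

    scaled = rgb * 0.25        # "scale down", e.g. -2 stops of exposure
    restored = scaled * 4.0    # scale back up

    print(np.allclose(rgb, restored))   # True: nothing was clipped or quantized

    # the same trip through 8-bit integers loses the shadow and clips the highlight
    as_uint8 = np.clip(scaled * 255, 0, 255).astype(np.uint8)
    print(as_uint8 / 255.0 * 4.0)       # no longer matches the original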
With a color-managed app there's the benefit of view LUTs; no need to make destructive changes anyway, not until export to the delivery format.
With regard to the contrasty images: how was exposure decided in camera? Is it simply all underexposed? Is it a white point thing?
Forgot to add: if exposure was judged through a Rec.709 LUT and then conversion is done to sRGB, then although they share the same color primaries, the gamma curves differ at the base, in the shadows. Perhaps ACES to Rec.709 conversion is the way to go.
Jeez, it would be good to have an edit-post option. :-) Last thing: ACES will be scene referred? sRGB is display referred. Going back to the gamma curve mentioned last post… or can a dev combine these posts into one. :-)
The contrast in the image is now confirmed to be just the default look that we get from the standard ACES display transforms. Changing exposure makes a difference of course but doesn’t really change the look. Conclusion seems to be that it’s just up to us to grade this.
We use sRGB output for testing since that is what monitors typically expect. If we do this, then we should be able to export to Rec.709 for TV, and something else again for film, and have it look similar. If we start viewing Rec.709 on our sRGB monitors things get mixed up.
And the ACES color space is indeed by design linear / scene referred.
Rec709 and sRGB are almost the same, only the gamma curve is a little bit different. (and to face the truth: even the biggest professionals mix them up all the time)
But of course it's great to do things as correctly as possible.
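For reference, this is roughly what that gamma difference looks like; a small sketch of the two encoding curves, using the standard published constants (nothing project-specific):

    import numpy as np

    def srgb_encode(x):
        # sRGB transfer curve: linear segment near black, then a ~2.4 power curve
        return np.where(x <= 0.0031308, 12.92 * x, 1.055 * np.power(x, 1 / 2.4) - 0.055)

    def rec709_encode(x):
        # Rec.709 OETF: linear segment near black, then a 0.45 power curve
        return np.where(x < 0.018, 4.5 * x, 1.099 * np.power(x, 0.45) - 0.099)

    x = np.array([0.001, 0.01, 0.05, 0.18, 0.5, 1.0])
    print(srgb_encode(x))    # the curves differ most in the shadows,
    print(rec709_encode(x))  # which is exactly where viewing gets mixed up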
Btw, since there are also P3 and XYZ DCDM ODTs, wouldn't it be a great idea to create a DCP once in a while to let the team review their work in a real cinema? (Reviewing on a computer screen is not the same as on a big screen in a large color space.)
brecht, it's not about changing exposure now or the default contrast of ACES; it's about exposure in camera, and the preview LUT used whilst deciding on exposure at acquisition.
With regard to the Rec.709 and sRGB monitor comment: we profile our monitors with a calibration device to establish the monitor's characteristics, then create a LUT or ICC profile, use a color-managed NLE / compositor to view either Rec.709 or sRGB, encode to HD video using a Rec.709 color matrix and transfer curve if going from linear and declare it in the stream, then use a color-managed media player to view?
And/or use a display device that has LUTs.
My understanding is that in converting the MXF to ACES EXR, the exposure, color temperature, etc. are all read from metadata in the MXF file and neutralized. Isn't that the point of the ACES color space, that you get a standard color space independent of particular camera or display settings?
You still have to judge exposure. ACES is all about color gamut, regardless of whether it technically covers CIE 1931; the camera sensor is about dynamic range; linear is about avoiding skewing the data in a non-linear fashion into a gamma encoding, which could be at a reduced bit depth. Exposure adjustment after acquisition will be perhaps a max of two stops.
You still have to judge exposure, by means of on-camera scopes, a preview LUT, or a color station (again, scopes etc.). Stick your hand over the lens of a camera and you cut out the light hitting the sensor; judge exposure incorrectly and ACES and 16-bit linear are not going to help.
I'm not familiar with this cam, but I assume we're not at the point technically where we can ignore exposure outright because it's ACES?
Well, if you go from RAW to RGB, you always have to "bake". (The Sony software maps the RAW values to an RGB space like ACES.)
But ACES is a very flexible RGB format, so don’t worry too much. ;)
J, I'm not worried, period. I was merely suggesting why the so-called default ACES to sRGB conversion was contrasty. To say it's the default is daft; the sensor probably has a 12-stop exposure latitude, and where you position 'best' exposure is subjective, so there is no default.
If the reason was underexposure leading to a contrasty image, so what; there's room to recover.
And again, what has ACES or linear got to do with exposure in camera? What has ACES got to do with lighting or contrast ratio?
The Sony F65 has about 14 stops of DR.
Looking at the EXRs, I think Joris the DP got the exposure spot on. One can always argue about what lengths you go to to protect highlights or to prevent noise, of course, but I'm very pleased with what I've seen so far.
ACES is not just linear, from reading the specification I really do believe it is independent of the particular exposure used in the camera. I don’t see why it couldn’t be.
But more importantly, the high contrast is not a feature of the footage, it’s also there when applying the default ACES to sRGB transform on rendered images. This is just the default look that we get from the ACES RRT, we had this confirmed by looking at the ACES test images and feedback from the OpenColorIO developer.
If we skip the RRT, which is supposed to give a film-like look, the images have much less contrast. This is supposed to be just the starting point for further grading; it could be argued whether that much contrast is a good starting point, but that's the standard.
brecht & J I understand all that, I’m not critizing or anything just commenting on reasons why there seems some surprise at contrast. In theory if the same transform was used at aquisition for exposure judgement and the same transform is used for conversion then there should be no surprises, we can assume the contrast ungraded look is as intended by the DP to get the best image for grading. Great and the imagery looks great when desaturated a bit. :-) Again subjective.
To say its the standard transform or default is the reason for the contrasty image is daft, contrast is determined at aquisition via lighting and exposure, previewed through a LUT it had to be, no way to judge exposure in linear light. So there should be no surprises.
And just because its 16bit raw linear is no magic bullet, linear is a more inefficient way to store light compared to LOG for instance but linear required for scene referred ACEs workflow and I’m not critizing anything but exposure still needs to be good. Not saying it isn’t just commenting on the initial surprise of how contrasty imagery is from team members. DP shouldn’t be surprised though should he?
Will be interesting to resolve this early, in the chances that the rest of the masses have access to similar equipment down the road.
What's Ian say? Maybe the red of the S-Gamut fits the tone of the production overall. Think the red tint makes things look a bit edgier and… heated. But the ACES looks great, even with the greater contrast.
My color correction, with Blender 2.62:
http://151.0.img98.net/out.php/i342836_a014c005-120510000000.jpg
How will this very high-contrast footage cope with colour correction at the end? Surely it's better to have a larger dynamic range in terms of luminance up front, and then only crush the blacks at the end? Or perhaps I'm not understanding how this works; is the dynamic range non-destructively maintained?
The originals remain floating point OpenEXR files; the files we show (JPG) are derived from them. We're not going to lose data at all, it's just playing with color spaces and conversions, and ensuring we're using the right high-dynamic-range linear RGB data.
Thanks for the explanation, this whole process is very informative!
Internet moves faster than books
Find the same image from this post and the last post. Place the desaturated one (last post) in the layer below. In the layer above, place the one from this post, but change the gamma to 1.6 and the blending mode to 50% Normal. What you get is what I like.
An example:
The top one is as explained in the post above. The middle one is from this post. The one on the bottom is from the post some days ago.
http://img210.imageshack.us/img210/9004/colorsq.jpg
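For anyone who wants to reproduce that blend numerically instead of in an image editor, here is a small numpy sketch of the recipe above. The file names are placeholders, I'm assuming the imageio library for reading/writing, and I'm reading "gamma 1.6" as the usual brighten-by-1/1.6 adjustment:

    import numpy as np
    import imageio.v3 as iio

    # placeholder file names for the two published JPGs
    top = iio.imread("frame_this_post.jpg").astype(np.float32) / 255.0      # image from this post
    bottom = iio.imread("frame_last_post.jpg").astype(np.float32) / 255.0   # desaturated image from the last post

    top = np.power(top, 1.0 / 1.6)      # "gamma 1.6" on the upper layer (brightens it)
    blend = 0.5 * top + 0.5 * bottom    # 50% Normal blend mode = a simple average

    iio.imwrite("blend.jpg", (np.clip(blend, 0.0, 1.0) * 255.0 + 0.5).astype(np.uint8))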
Actually the last one looks best to me; it's a little undersaturated, but the other two are just too oversaturated.
Just with the atmosphere being all dirty and destroyed, post-apocalyptic kind of, I think you should definitely be careful with too-colorful shots.
I like the grayish color too! This film admittedly needs a dark atmosphere, in order to allow extreme contrast with the explosions…
But my mind is unbalanced because of Diablo III.
The point is to have footage with good gamma and color before feeding it to Blender. Then of course you modify the footage in Blender: desaturating, darkening, all the post-production you want.
Hey guys!
Looking at the frames, a question popped into my head; will the DoF blur increase the difficulty of working with green screen? How will this be approached?
I’m excited watching this film come together. Keep up the great work guys! :D
Semi-transparent areas are indeed challenging, but with hair we already have to handle those anyway. The keying / despilling just has to be good enough to handle this, can’t really avoid it.
Hi, will a new "keylight" node for Blender be available soon? Because today it's possible to do a good keying (not perfect) in Blender, but it's really long-winded and it takes a lot of nodes (different chroma key, erode, blur, mix, alpha and even a difference-from-RGB separate node). And the despill is fine but not perfect.