There’s now a page with some basic explanation of Mango color spaces on the wiki, and a package of 2K frames in a few different color spaces (about 250MB). There are linear EXRs with ACES, S-Gamut and Rec. 709 chromaticities, and sRGB PNG images with and without the ACES RRT (film-like look) applied.
If you want to try grading in a current version of Blender, the linear Rec. 709 EXR files are the ones to use, since those are in the color space Blender expects. The same usually goes for other applications without ACES support. Below are some images without any film-like look, and without tone mapping applied to deal with bright colors; just the result you see when loading the Rec. 709 EXR files in Blender.
Some things we learned:
- Not all linear color spaces are the same: the chromaticities can be different, and it’s important to match those up! After converting to the right linear color space, we can load up the EXRs in Blender and view them with the same colors we saw on set.
- We can do rendering, compositing and grading in the standard Blender linear color space, and still use ACES footage and deliver ACES, as long as we use the right transforms, without any loss in quality. This helps us because our existing .blend files all assume this color space, and it’s no fun having to change that halfway through the project.
- We’ll need OpenColorIO integration if we want to take advantage of the ACES film-like look, and to make it easier to export to various display devices (monitors, HDTVs, projectors, ..). And of course to make it possible for other users to set up a proper color workflow without the need for batch-converting image files before opening them in Blender.
- Converting a render for display on a monitor is not only about using the correct color transformations, but also about artistic choices. We’ll need to do some experimenting to figure out the default look we like, as a starting point for grading all the shots.
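The chromaticity-matching point above can be sketched numerically. A minimal example, assuming NumPy; the matrix values are the standard linear Rec. 709 ↔ CIE XYZ ones from the sRGB specification:

```python
import numpy as np

# Linear Rec. 709 (sRGB primaries, D65 white) to CIE XYZ, per the sRGB spec.
REC709_TO_XYZ = np.array([
    [0.4124564, 0.3575761, 0.1804375],
    [0.2126729, 0.7151522, 0.0721750],
    [0.0193339, 0.1191920, 0.9503041],
])

# The inverse matrix takes XYZ back to linear Rec. 709.
XYZ_TO_REC709 = np.linalg.inv(REC709_TO_XYZ)

# A "linear" RGB triple only names a real color once the primaries are
# known: the same numbers mean different colors under different
# chromaticities, which is why the spaces have to be matched up.
pixel = np.array([0.2, 0.5, 0.1])   # linear Rec. 709
xyz = REC709_TO_XYZ @ pixel         # device-independent XYZ
back = XYZ_TO_REC709 @ xyz          # round trip is exact in float

assert np.allclose(back, pixel)
```

The same pattern with a different primaries matrix covers the ACES and S-Gamut footage.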
Don’t you hate it when people comment about being the first to post a comment…
You don’t wanna know… ;)
What kind of batch transform did you use to convert the ACES EXRs to the Rec.709 EXRs?
Is the reason you don’t have any information loss that EXR allows out-of-gamut values?
It’s a custom OCIO transformation, using the ACES to XYZ and XYZ to Rec. 709 matrices. It will be released, of course.
And yes, there is no information loss because EXR (and the Blender compositor) allows out-of-gamut values.
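A sketch of how such a transform can be composed, assuming NumPy. The AP0 → XYZ values below are the ones published in the ACES specification; note that the white-point difference between ACES (D60) and Rec. 709 (D65) is glossed over here, which a production transform would handle with a chromatic adaptation step:

```python
import numpy as np

# ACES2065-1 (AP0 primaries) to CIE XYZ, from the ACES specification.
ACES_TO_XYZ = np.array([
    [0.9525523959, 0.0000000000,  0.0000936786],
    [0.3439664498, 0.7281660966, -0.0721325464],
    [0.0000000000, 0.0000000000,  1.0088251844],
])

# CIE XYZ to linear Rec. 709 (inverse of the standard Rec. 709 -> XYZ matrix).
XYZ_TO_REC709 = np.array([
    [ 3.2404542, -1.5371385, -0.4985314],
    [-0.9692660,  1.8760108,  0.0415560],
    [ 0.0556434, -0.2040259,  1.0572252],
])

# Compose the two into a single 3x3 that could be baked into an OCIO
# MatrixTransform; applied per pixel it converts ACES to linear Rec. 709.
ACES_TO_REC709 = XYZ_TO_REC709 @ ACES_TO_XYZ

pixel_aces = np.array([0.18, 0.18, 0.18])   # mid-grey in ACES
pixel_709 = ACES_TO_REC709 @ pixel_aces

# The composed matrix is invertible, so the conversion loses nothing.
assert np.allclose(np.linalg.inv(ACES_TO_REC709) @ pixel_709, pixel_aces)
```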
Brecht, that doesn’t sound correct.
Since Rec.709 is a smaller-gamut space, all the XYZ values that sit beyond the Rec.709 gamut will have to be remapped to in-gamut values using some rendering intent.
The out-of-gamut colors have to be translated to a displayable color. That’s what the colorimetric and perceptual intents do.
If you go back to ACES you’ll have the ACES gamut as headroom, but the colors of the converted image will be constrained to the smaller space’s gamut.
For instance, if you had a saturated green shade in ACES that doesn’t fit the Rec.709 gamut, it will be translated to the nearest in-gamut color when you convert it.
That shade of green will of course be inside the ACES gamut, and if you convert back to ACES it will stay there, because the rendering intent won’t have to re-map it.
If the OCIO transform doesn’t remap gamut and keeps those out-of-gamut values, it isn’t performing a colorimetrically correct transformation. It will just be displaying the original information assuming a different set of primaries, which means that what you see is no longer what you’ll get. Bad idea.
Oh, and by the way, I’m very glad to see how this discussion evolved. All the concepts (except the lossless gamut roundtrip thing) feel just right, and it shows in the sample images. Good saturation, natural look, and the visible noise was greatly reduced!
Regarding the gamut loss, I’d recommend taking a deeper look. The gamut loss won’t be a critical problem for the final project (I guess that working in Rec.709 with enough precision will suffice in most applications), but if the idea is to keep as much color information as possible, choosing a small-gamut space as working space isn’t the best idea.
I think that Rec.709 will suffice for now, but it’s a good opportunity to clarify this and get it straight: if you want to keep the color information of the original footage, your working space has to have an equal or wider gamut. Of course outputs and previews can be smaller, but the internal space has to have room for the original gamut if you don’t want to lose color.
“If the OCIO transform doesn’t remap gamut and keeps those out-of-gamut values, it isn’t performing a colorimetrically correct transformation. It will just be displaying the original information assuming a different set of primaries, which means that what you see is no longer what you’ll get. Bad idea.”
Now read this: http://opencolorio.org/configurations/spi_vfx.html “It is absolutely critical to guarantee that process – end to end – is colorimetrically a no-op. Under no circumstances are any unintended modifications to the original image allowed.
Thus, this profile uses very simple (1D) conversions for all input and output color space conversions. All of the complexity (the 3D LUT film emulation lifting) is handled at display time, and is never baked (or unbaked) into the imagery. For visualization, this profile includes a generic Kodak Vision print emulation suitable for display on a reference sRGB monitor or a P3 Digital Cinema projector.”
Do you disagree with the before-display (compositing) path?
Brecht’s Rec.709 EXRs seem to match quite well with the LUT mentioned in the last quote; I tested it myself. (And with this “film” LUT applied, the out-of-gamut values get nicely mapped in a filmic way, just like in the ACES RRT transforms.)
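The no-op-until-display principle from that quote can be sketched in a few lines of Python; `film_look` is a hypothetical stand-in for whatever 3D LUT the viewer applies, here just a simple tone curve for illustration:

```python
import numpy as np

def film_look(linear_rgb):
    """Stand-in for a display 3D LUT (e.g. a film print emulation).
    This is just an illustrative x/(1+x) tone curve, not a real LUT."""
    return linear_rgb / (1.0 + linear_rgb)

# Scene-linear working image; values above 1.0 are perfectly legal.
working_image = np.array([[0.1, 4.0, 0.02]])

# What gets written to disk: the untouched scene-linear data (a no-op).
saved = working_image.copy()

# What gets shown on the monitor: the look, applied only at display time.
displayed = film_look(working_image)

# The look is never baked into (or unbaked from) the stored imagery.
assert np.array_equal(saved, working_image)
```

The point is that all the “film emulation lifting” happens on the way to the screen, so the imagery itself stays colorimetrically untouched.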
@J.: That’s interesting. Thanks for the link.
I have to admit that I’m not used to this kind of workflow and it’s kind of new to me, so I couldn’t agree or disagree before reading more about it, but if Blender will do what’s described in that link, I guess it’s fine.
I’d like to see if the roundtrip is effectively lossless. Have you tried exporting one of those Rec.709 EXRs from Blender and converting it back to ACES through OCIO?
Brecht hasn’t released his transform matrices yet, so this question will be forwarded to Brecht.
Brecht? :)
Because we are working with floats, out-of-gamut colors are preserved as well. The conversion between these linear color spaces is a simple, invertible 3×3 matrix transformation with no data loss.
If you do a transform back to ACES and do a 1:1 compare to the original ACES files from the Sony software, is there a difference?
There is no difference in that case.
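That lossless round trip is easy to illustrate. A minimal sketch, assuming NumPy; the matrix values here are illustrative, since any invertible 3×3 conversion between two linear RGB spaces behaves the same way:

```python
import numpy as np

# Any conversion between two linear RGB spaces is an invertible 3x3 matrix;
# the exact values don't matter for the round-trip argument (these are
# made up for illustration, representing a wide-to-narrow gamut conversion).
M = np.array([
    [ 1.70505, -0.62179, -0.08326],
    [-0.13026,  1.14080, -0.01055],
    [-0.02400, -0.12897,  1.15297],
])
M_inv = np.linalg.inv(M)

# A very saturated source color lands outside the destination gamut,
# i.e. produces negative components; float EXR files simply store those.
src = np.array([0.0, 1.0, 0.0])
dst = M @ src                 # contains components < 0
roundtrip = M_inv @ dst

# Because nothing was clamped along the way, the round trip recovers
# the original exactly (up to float precision).
assert dst.min() < 0.0
assert np.allclose(roundtrip, src)
```

The loss described earlier in the thread only happens if the out-of-gamut values are clamped or gamut-mapped somewhere in between, which float EXRs and the compositor avoid.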
Looking good! And thank you for sharing your research and experimenting in colour spaces. It’s been quite informative.
I inquired about color calibration on the screens you use.
Now, whilst you don’t calibrate the monitors as of yet, and there may be no need to, you can calibrate your TV screens using the extras included on some DVDs and Blu-rays.
THX.com details which DVDs come with extras to help calibrate your TV sets. Is this applicable to computer screens?
It may be of some use to get all screens calibrated to some extent.
regards
TFS
Good luck calibrating computer monitors, they aren’t well known for accuracy and consistency. I would guess that you will need a test pattern image to pull up in compositing or the image viewer to check values (especially darks) against.
I thought that for computer monitors, especially IPS variants, calibration was possible, especially with products like the Spyder?
Again, the THX-certified DVDs have test images to enable changes to contrast, sharpness, tint, etc.
It’s my impression that people working in the industry do in fact choose and calibrate monitors for consistency and accuracy; is this not the case?
They have Eizo monitors (very color accurate) at the Blender Institute, but they don’t maintain the calibration/profiling. (which isn’t that difficult in Linux anymore, it’s even quite easy)
The only thing you need to buy/rent/borrow is a compatible measurement device.
At the very least do the color grading on a calibrated/profiled screen. That’s very important!
I’ve been investigating color spaces on two calibrated monitors, and we’ll do the grading on a properly calibrated monitor.
Up to now the difference between the right and wrong transforms has been so obvious that it wasn’t even needed, but when the creative tweaks come in I expect those will be more subtle.
It should be noted that calibration and characterization are two separate but related things.
Calibration is the process of getting the device to achieve target values such as brightness, contrast, viewing environment, etc., and should be checked prior to any grading session. Characterization is the measuring of a calibrated device and how it differs from the idealized standard.
See http://www.argyllcms.com/doc/calvschar.html
How do you expect to do nice, fine color grading if you don’t calibrate your monitor? :p
A good calibration device will also take the ambient light of the room into account to compensate. But usually you would calibrate in the same kind of environment as the destination, so if you’re looking to grade for theater/cinema, no lights in your room ;)
I don’t understand what you guys are talking about but it sounds interesting. Hopefully when the Mango DVD comes out I will not be this ignorant by then. Apparently I still have a lot of research to do.
nice documentation, very clear !
Why is there a need to calibrate colors at all? Different output devices produce different colors, and color perception depends on many factors (as someone said) like environment lighting, vision adaptation, and the influence of some chemicals on viewer perception.
Let me put it this way:
The best way to watch Mango will be as DCP in a good cinema, yet most people will watch it on YouTube. But luckily that doesn’t keep the Blender Institute from making a cinema-quality production.
The reason that not everyone will watch it in an ideal way is not a reason to be sloppy with your color workflow.
@Brecht: Are you also gonna test the P3 RRT and make a Digital Cinema Package out of it?
And how will the sound designer sync up his work with the edit in the Blender Sequencer?
Sounds logical. I hope there will be an opportunity to see the movie in a cinema.
Regarding a digital cinema package, we haven’t discussed that yet, so I don’t know.
I’m also not sure about the sound design workflow for Mango, have not been involved with this. I think on some previous projects the sound designer had access to our svn and the sequencer .blend file with the edit, so that helps to stay up to date.
Does this mean that the sound designer must guess where the sync audio comes from by looking at the VSE? That doesn’t sound too timecode-accurate. Exporting an industry-standard XML edit list would be helpful there.
I was referring to previous projects, where we had no actual footage with audio included. I have no idea about the workflow for this project; someone else will have to answer that.
Very cool to see Blender moving toward adopting OpenColorIO and the Academy IIF! Even without full OpenColorIO integration, it would be a huge step forward for vfx artists if Blender were to add the ability to set a generic 3D LUT (plus a 1D pre-LUT for lin->log) in the viewer, so we can render and comp through a film look while working in the scene-linear colorspace.
I’m not ready to give up my Houdini and Nuke just yet, but with all these new features (Cycles, camera tracker, sophisticated color pipeline), I can see Blender becoming quite a useful tool for film vfx professionals.
Note that typical screen calibration alone will not display accurate sRGB (or whatever the working color space is), especially on a wide-gamut monitor (this is because, in most cases, calibration works with 1D LUTs, and thus does not adapt the RGB primaries in question with respect to gamut). In order to get accurate sRGB colors, you will need to profile your calibrated screen, and use a Color Management System to translate colors from working RGB to monitor RGB. Blender itself does not have this capability yet.
However, it does not need it (at least with Compiz/Linux). There is a project called Compiz-CMS which automatically does this conversion using hardware shaders at the window level (with the possibility to opt out individual color-managed applications).
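The 1D-LUT limitation mentioned above is easy to see: a per-channel curve can never mix channels, so it cannot move the primaries, which is exactly what correcting a wide-gamut monitor requires. A small illustration (the matrix values are made up, not from a real monitor profile):

```python
import numpy as np

def per_channel_lut(rgb, curve):
    """Calibration-style 1D LUT: each channel is corrected independently."""
    return curve(rgb)

gamma = lambda x: x ** (1.0 / 2.2)

# Pure red stays pure red under ANY per-channel curve: the zero channels
# stay zero, so the chromaticity of the primary is untouched.
red = np.array([1.0, 0.0, 0.0])
assert np.array_equal(per_channel_lut(red, gamma), red)

# A profiling-based 3x3 matrix CAN mix channels and thus re-map the
# primaries toward the monitor's actual gamut (illustrative values).
monitor_matrix = np.array([
    [0.90, 0.08, 0.02],
    [0.05, 0.92, 0.03],
    [0.01, 0.04, 0.95],
])
remapped = monitor_matrix @ red   # no longer a pure primary
assert remapped[1] > 0.0
```

Hence calibration (per-channel curves) and characterization/profiling (a full device model the CMS can use) solve different parts of the problem.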
On a side note, is it just me, or does the movie industry ignore/snub the established standards for characterizing color displays (ICC profiles)?