8 Replies Latest reply on Mar 9, 2014 10:14 AM by Alp Er Tunga

# Camera calibration tab and camera color space

Hi, I need confirmation about the hue and saturation sliders on the Camera Calibration tab. Are they there for fine-tuning the camera color space primaries in a DNG profile? Thanks a lot.

• ###### 1. Re: Camera calibration tab and camera color space

As I'm not a programmer, I cannot read the source files in the DNG SDK. So I need help understanding two points conceptually. Thanks a lot in advance.

1. There is a step in the DNG workflow called "linearization" for constructing a 16-bit CFA image from the digital data coming from the sensor. I would like to learn the calculations in this step; a simple numerical example would be very helpful.

2. After the demosaicing step, we have an RGB file with three channels in the camera's native space. I would like to learn how the primaries of this camera native space are decided for converting it to CIE XYZ or CIELAB.

And additionally, which color space does ACR use as the PCS, CIE XYZ or CIELAB?

• ###### 2. Re: Camera calibration tab and camera color space

Hello, here I'm again.

I'm probably in the wrong forum, because I can't tell you anything about the DNG secrets (though I have the specs).

About your second question (more an issue for a color science forum), I've found something:

Estimation of the primaries for a digital camera:

(1)
Concerning the Calculation of the Color Gamut in a Digital Camera
Francisco Martínez-Verdú et al.
(2)
http://docs-hoffmann.de/leastsqu16112006.pdf

Given is an equation

(*) X = C R

We have two applications:

(1) Original question

X = | X1 X2 ... Xm |
    | Y1 Y2 ... Ym |
    | Z1 Z2 ... Zm |

m = 41 values in each row for the CIE CMFs (color matching functions), 380 nm ... 780 nm.

R = | R1 R2 ... Rm |
    | G1 G2 ... Gm |
    | B1 B2 ... Bm |

41 values in each row for the sensor sensitivities of the red, green and blue CCD array elements.

Matrix C is found by solving equation (*), using the pseudoinverse.
Matrix C contains the XYZ coordinates of the primaries red, green and blue as columns 1, 2 and 3.

(2) ColorChecker application

X = | X1 X2 ... Xm |
    | Y1 Y2 ... Ym |
    | Z1 Z2 ... Zm |

m = 24 values in each row for the GretagMacbeth ColorChecker.

R = | R1 R2 ... Rm |
    | G1 G2 ... Gm |
    | B1 B2 ... Bm |

24 values in each row for the RGB results as delivered by the camera, for instance in sRGB.

Matrix C contains the XYZ coordinates of the effective primaries (including sRGB) red, green and blue as columns.

Document (2) explains the principle of the pseudoinverse for overdetermined systems of linear equations.
Article (1) explains the application to the identification of camera primaries for given sensor sensitivities.
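The pseudoinverse step can be sketched in a few lines of NumPy. The spectral data below is synthetic, standing in for measured CMFs and sensor sensitivities sampled at the same wavelengths:

```python
import numpy as np

# Sketch of solving X = C R for C with the pseudoinverse.
# The data here is synthetic; real use would load measured CIE CMFs
# and sensor sensitivities sampled at the same wavelengths.
rng = np.random.default_rng(0)

m = 41                      # samples, e.g. 380 nm ... 780 nm in 10 nm steps
R = rng.random((3, m))      # rows: sensor sensitivities for R, G, B
C_true = np.array([[0.6, 0.2, 0.2],     # an arbitrary "ground truth" matrix
                   [0.3, 0.6, 0.1],
                   [0.0, 0.1, 0.9]])
X = C_true @ R              # rows: x-bar, y-bar, z-bar responses

# Overdetermined system (9 unknowns, 3*41 equations):
# C = X R^T (R R^T)^(-1), i.e. X times the pseudoinverse of R
C = X @ np.linalg.pinv(R)

# The columns of C are the XYZ coordinates of the (pseudo)primaries.
print(np.allclose(C, C_true))   # True, since X was built exactly from R
```

Here the recovery is exact because X really is a linear transform of R; with real sensor data the pseudoinverse gives the least-squares best fit instead.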

Unfortunately, the whole concept fails if the camera sensor sensitivities are not
linear combinations of the CIE CMFs x-bar, y-bar, z-bar. In this case, a camera does not have
true primaries, but the computational result can be used as an approximation.

CMF = color matching function (CIE)

I regret – it cannot be explained 'for the layman'.

Best regards --Gernot Hoffmann

• ###### 3. Re: Camera calibration tab and camera color space

Hi Gernot,

I've also found some articles, and with the addition of yours, I need some time to read and digest all of them.

Thanks a lot for all your help.

I have some math background, but I'm not so interested in the math side of this subject.

I just want to understand it at the conceptual level.

Can we not summarize what is happening at this stage in 4-5 sentences?

• ###### 4. Re: Camera calibration tab and camera color space

Hello Alp Er Tunga,

let me begin with a recommendation: read papers where Sabine Süsstrunk is author or co-author; this guarantees excellent quality. For instance this one:

http://infoscience.epfl.ch/record/182538/files/egpaper_final.pdf

If we refer to a certain document, I might be able to explain this and that. Please ask here.

The issue cannot be summarized in a few sentences, but trying this isn't wasted time:

An input device, camera or scanner, contains three types of color sensors: one for R, one for G

and one for B. Each sensitivity function (versus wavelength) looks like a narrow Gaussian bell.

These shapes may or may not overlap. Probably this sounds very reasonable – and it is, if,

for instance, we had to reproduce offset prints once and for all. Such a 'densitometric device'

could be calibrated by test images for this application.

For real world photos this doesn't work. The scene shows colors which can be uniquely

described by CIE XYZ. The camera should be able to reproduce these colors in XYZ.

The only accurate solution to this problem was discovered by Luther, the Luther condition:

Original:

http://docs-hoffmann.de/luther1927.pdf

Translation into English:

The Luther condition: the three sensitivity functions have to be the CIE color matching functions

(CMFs), or linear combinations of those. x-bar, y-bar and z-bar are all positive, thus they can be

used for sensors, but it turned out that this choice leads to severe inaccuracies (the sensor

covers the whole horseshoe diagram area, the x-bar and y-bar curves overlap widely,

and the transformation into common RGB spaces suffers from noise).

Linear combinations mostly have negative lobes. As a compromise, these are clipped, and the

remaining behaviour isn't exactly colorimetric, but altogether not bad. In any case it's nearer to the

Luther idea than the densitometric solution.

Therefore it's possible to solve the mentioned equation X = C R for C and extract the three columns

as approximations for the primaries in XYZ, or, as I would say now, to calculate the three pseudoprimaries.

Just this is explained in the article by Sabine Süsstrunk.
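Whether a given sensor satisfies the Luther condition can be tested numerically: fit the sensitivities as linear combinations of the CMFs by least squares and look at the residual. A small sketch with synthetic curves (real use would load measured CMFs and camera sensitivities):

```python
import numpy as np

# Luther condition check: are the sensor sensitivities S (3 x m)
# linear combinations of the CIE CMFs A (3 x m)?
# Synthetic data for illustration only.
rng = np.random.default_rng(1)
m = 41
A = rng.random((3, m))               # stand-in for x-bar, y-bar, z-bar

M = np.array([[1.0, 0.2, 0.0],       # an arbitrary mixing matrix
              [0.1, 0.8, 0.1],
              [0.0, 0.3, 0.7]])
S_luther = M @ A                     # exactly a linear combination of the CMFs
S_real = S_luther + 0.05 * rng.random((3, m))   # a "real" camera deviates

def luther_residual(S, A):
    # Least-squares fit S ~ M A; a residual of 0 means the condition holds.
    M_fit, *_ = np.linalg.lstsq(A.T, S.T, rcond=None)
    return np.linalg.norm(S - (A.T @ M_fit).T)

print(luther_residual(S_luther, A))  # ~0: Luther condition satisfied
print(luther_residual(S_real, A))    # clearly > 0: only an approximation possible
```

A nonzero residual is exactly the situation described above: the camera then has no true primaries, and the solution of X = C R is only a best-fit compromise.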

Best regards --Gernot Hoffmann

• ###### 5. Re: Camera calibration tab and camera color space

I've just come back home, and I was reading your first post and looking at the articles with more attention. I think I get the idea, but I need to read all the referenced materials here before saying anything. Thank you so much.

• ###### 6. Re: Camera calibration tab and camera color space

My summary is here ))

Digital cameras are another type of animal, with spectral sensitivities different from human ones, so they see the world a little differently from us.

We need to map from their color vision to human vision which is specified by CIE colorimetry.

There are two species of this animal.

1. If we have a camera whose spectral sensitivities are exactly linear combinations of the human CMFs, then with a 3x3 conversion matrix we can exactly calculate its primaries with the help of some patches whose Lab values are previously known. In this case, is it possible to derive the coefficients of the conversion matrix from a single patch?

2. If we have a camera whose spectral sensitivities are not linear combinations of the human CMFs, then again with a 3x3 conversion matrix we cannot calculate its primaries exactly, but we can estimate them approximately with the help of some patches whose Lab values are previously known. In this case we need more patches to approximate the coefficients of the conversion matrix by the least-squares technique; the more patches, the more correct the mapping.

• ###### 7. Re: Camera calibration tab and camera color space

I would like to confirm your opinion – you're right.

If the spectral sensitivities are linear combinations of the CMFs,

then the conversion matrix is the exact solution, as you said.

But this matrix contains 9 unknowns, which are not correlated.

Each patch supplies three equations (for X, Y and Z), therefore one needs at least 3 independent patches.

If the conversion matrix is only an approximation, quite normal for

commercial cameras, one needs more patches.
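The patch-based least-squares fit can be sketched like this; the patch values are synthetic stand-ins for measured ColorChecker data:

```python
import numpy as np

# Estimating the 3x3 conversion matrix from target patches by
# least squares. XYZ_ref and RGB_cam are synthetic stand-ins for
# measured patch values (e.g. from a GMB ColorChecker).
rng = np.random.default_rng(2)
n_patches = 24                       # e.g. the GMB ColorChecker

C_true = np.array([[0.41, 0.36, 0.18],   # arbitrary "ground truth" matrix
                   [0.21, 0.72, 0.07],
                   [0.02, 0.12, 0.95]])
RGB_cam = rng.random((3, n_patches))     # camera responses per patch
# reference XYZ values, with a little measurement noise added
XYZ_ref = C_true @ RGB_cam + 0.01 * rng.standard_normal((3, n_patches))

# Solve XYZ = C RGB in the least-squares sense (9 unknowns, 3*24 equations)
C_fit, *_ = np.linalg.lstsq(RGB_cam.T, XYZ_ref.T, rcond=None)
C_est = C_fit.T

print(np.abs(C_est - C_true).max())  # small: the estimate is noise-limited
```

With more patches the fit averages out measurement noise, which is the "the more patches, the more correct the mapping" point made above; for a strongly non-colorimetric camera the residual stays large no matter how many patches are used, and only a nonlinear calibration helps.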

The GMB ColorChecker has 24 patches. As shown in one of my documents,

the linear approach isn't very fruitful, whereas a nonlinear camera

color calibration with GMB ProfileMaker delivered rather good results

for the reproduction of paintings (as opposed to 3D real scenes, where a

single target isn't of great value, in my opinion).

Best regards --Gernot Hoffmann

• ###### 8. Re: Camera calibration tab and camera color space

One more point is clear for me now. Thank you.

I would like to ask a specific question about the Camera Calibration tab, found in Lr and ACR and also in the DNG Profile Editor.

There are hue and saturation sliders on this tab, and it is obvious that these sliders are there for fine-tuning the estimated camera primaries.

I've been playing with these sliders for some time, and when the saturation sliders are all set to the left end (to the zero saturation point), I expect to get a grayscale image.

But when the saturation sliders are set to the left end for all primaries, I've noticed that I always get a slight blue-magenta tint on my photos.

Can there be any colorimetric reason for this behaviour?