ANNEX IV. COLOUR THEORY

O. FERRER-ROCA

University of La Laguna, Tenerife, Canary Islands, Spain

    This annex provides complementary information on colour capture and manipulation, and on the factors that may modify them, with regard to visual perception, image analysis and densitometry.

IV.1. INTRODUCTION

    Light can be divided into achromatic and chromatic light. The only attribute of achromatic light is INTENSITY, whose scalar measure produces the grey levels.

    Chromatic light, on the contrary, spans the electromagnetic spectrum from 400 nm (blue region) to 700 nm (red region); wavelengths above this range belong to the non-visible infrared spectrum. The three basic quantities linked to chromatic light are:

RADIANCE (measured in watts), the total amount of energy that flows from the light source.
LUMINANCE = Y (measured in lumens, lm); for example, the luminance of light in the infrared region is 0. It is defined by the Commission Internationale de l'Éclairage (CIE) as the radiant power weighted by a spectral sensitivity function that is characteristic of vision. Its magnitude is proportional to power and is therefore intensity-like, but its spectral weighting reflects the brightness sensitivity of human vision.
BRIGHTNESS, a subjective descriptor that cannot be measured directly and embodies the achromatic notion of intensity (I). It is defined by the CIE as the attribute of a visual sensation according to which an area appears to emit more or less light, and is therefore a non-linear function of luminance. The perceptual response to luminance is called LIGHTNESS by the CIE and is roughly logarithmic for the human eye.

    IV.2 LIGHT COLOURS AND COLOURS OF OBJECTS

    IV.2.1. LIGHT COLOURS

    Due to the anatomical structure of the eye, colours are seen as combinations of the three primary colours RGB (Red, Green, Blue). According to the CIE they are located at B = 435.8 nm, G = 546.1 nm and R = 700 nm. These primary colours are mixed to produce the secondary colours: magenta (R+B), cyan (G+B) and yellow (R+G).

Combining the primary colours in equal energy proportions, R(x) + G(y) + B(z) = 1, gives white light.

    IV.2.2. COLOURS OF THE OBJECTS

    It is also important to clarify the difference between light colours and the colours of objects: the property of object pigments is that the substrate absorbs one primary light colour and reflects or transmits the other two. Therefore the primary pigments are MCY (Magenta, Cyan, Yellow) and the secondary pigments are RGB.

    Colours of objects are distinguished by three characteristics (IHS):

BRIGHTNESS; I = intensity or brightness
HUE; H = dominant wavelength perceived by the observer
SATURATION; S = the amount of white light mixed with the hue

    The combination of H and S is called CHROMATICITY; therefore a colour can be defined simply by two parameters, Brightness and Chroma (B-Ch). In general the latter includes two signals, ChA (from green to red) and ChB (from blue to yellow).

    Since R(x) + G(y) + B(z) = 1, only x and y are needed to determine the chroma; z is obtained from the previous equation as z = 1 - x - y.

    This means that colour can be reduced to LUMINANCE (Y) plus two chroma signals, as in the YIQ space (luminance, in-phase and quadrature), whose conversion from RGB is as follows:

Y = 0.299 R + 0.587 G + 0.114 B
I = 0.596 R - 0.275 G - 0.321 B
Q = 0.212 R - 0.523 G + 0.311 B
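
    As an illustration, the following minimal Python sketch (assuming R, G and B values normalised to the range 0-1) applies the conversion above:

def rgb_to_yiq(r, g, b):
    """Convert normalised R, G, B (0-1) into luminance Y and chroma I, Q
    using the coefficients given above."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.275 * g - 0.321 * b
    q = 0.212 * r - 0.523 * g + 0.311 * b
    return y, i, q

# Example: equal-energy white (R = G = B = 1) gives Y = 1 and (essentially) zero chroma.
print(rgb_to_yiq(1.0, 1.0, 1.0))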

    IV.3 TRIDIMENSIONAL REPRESENTATION OF COLOUR

    A colour object can be decomposed into three digital images: the achromatic (luminance) information (A) plus two signals containing the differential visible spectral colour (C1, C2).

Achromatic signal:  A = log R + log G + log B
Chromatic signals:  C1 = 3/2 (log R - log G)
                    C2 = log B - 1/2 (log R + log G)

    In the tridimensional representation scheme the LUMINOSITY (A), or LUMA, is the projection of a pixel (q) on the y axis. The SATURATION (S), or colour purity, depends on the amount of white contained in the pure colour and is the modulus of the vector of pixel q in the z-x plane: S = sqrt(C1^2 + C2^2). Finally, the HUE (H) is the colour attribute expressed as the angle between the x axis (C1) and the S vector: H = arccos(C1/S).


Figure IV.1. Tridimensional representation of Luma (A), and chromatic signals (C1, C2).
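
    A minimal Python sketch of this decomposition, assuming strictly positive R, G and B values (the small epsilon added to avoid log(0) is not part of the original formulation):

import math

def decompose(r, g, b, eps=1e-6):
    """Decompose an RGB pixel into luma (A), chroma (C1, C2),
    saturation (S) and hue (H) as defined above."""
    lr, lg, lb = math.log(r + eps), math.log(g + eps), math.log(b + eps)
    a = lr + lg + lb                        # achromatic (luma) signal
    c1 = 1.5 * (lr - lg)                    # chromatic signal 1
    c2 = lb - 0.5 * (lr + lg)               # chromatic signal 2
    s = math.sqrt(c1 ** 2 + c2 ** 2)        # saturation: modulus in the C1-C2 plane
    h = math.acos(c1 / s) if s > 0 else 0.0 # hue: angle between the C1 axis and the S vector
    return a, c1, c2, s, h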

    IV.4. COLOUR SPACES

    The ANSI work formed the draft IPI-CAI (Common Architecture for Imaging) international standard, which identifies the colour spaces listed below, and the ISO/ANSI work provides a standardised methodology for describing the interrelationships between these standards.

Table IV.1. Colour Spaces.

Standard   Type                         Colour spaces
YES        CIE                          XYZ, Yxy, UVW, Yuv, L*a*b, L*u*v
           Linear RGB and Gamma RGB     CCIR-709, NTSC, EBU, SMPTE
           Luminance-Chrominance        YIQ, YUV, SMPTE YCrCb, CCIR-709 YCrCb, EBU YCrCb
NO         Others                       RGB, CMY, CMYK, IHS

    IV.5. RESPONSE OF THE DETECTORS.

    Video systems are built in such a way that they approximate the lightness response of human vision, that is, a roughly logarithmic response. The linear-light intensity is transformed into a non-linear video signal by gamma correction (see Chapter 2 - Displays), because the visual response to intensity is effectively the inverse of a CRT's nonlinearity.

    For example, if R is the linear-light intensity and R' the non-linear component (such as the voltage in video systems):

R' = 4.5 R                   for R < 0.018
R' = 1.099 R^0.45 - 0.099    for 0.018 <= R
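
    A minimal Python sketch of this transfer function and of its inverse (the inverse formulas are derived here from the two expressions above and are not given in the original text):

def gamma_encode(r):
    """Apply the transfer function above to a linear-light value R in [0, 1]."""
    if r < 0.018:
        return 4.5 * r
    return 1.099 * r ** 0.45 - 0.099

def gamma_decode(r_prime):
    """Invert the transfer function: recover linear light from the video signal R'."""
    if r_prime < 4.5 * 0.018:              # 0.081, the encoded value at the breakpoint
        return r_prime / 4.5
    return ((r_prime + 0.099) / 1.099) ** (1 / 0.45)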

    IV.5.1. VARIATIONS IN GAMMA CORRECTION

    Gamma correction varies according to the output device:

1.- Video systems effectively code into a perceptually uniform domain: a 0.45-power function is applied at the camera for gamma correction.
2.- Synthetic computer graphics calculates the interaction of light and objects. It is conventional in computer graphics to store linear-light values in the frame buffer and to introduce gamma correction in the look-up table at the output of the frame buffer (a sketch of such a table is given below).
3.- Desktop computers are optimised neither for image synthesis nor for video. They have programmable gamma with either poor or no standards; consequently, image interchange among desktop platforms produces different results, and in particular they are not suitable for medical imaging applications if gamma correction cannot be standardised.
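
    As an illustrative sketch of point 2, an output look-up table for the frame buffer can be built as follows, assuming 8-bit pixel values and the 0.45-power function mentioned above (real frame buffers and exponents vary by platform):

# Build an 8-bit gamma look-up table: linear-light code -> gamma-corrected code.
lut = [round(255 * (value / 255) ** 0.45) for value in range(256)]

def apply_lut(linear_pixels):
    """Map linear-light 8-bit pixel values through the output LUT."""
    return [lut[p] for p in linear_pixels]

# Example: the linear midtone code 128 is mapped to the brighter displayed code 187.
print(apply_lut([0, 128, 255]))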


FIGURE IV.2. Original image on the left, transported to another platform on the right. Image provided by Pedro Arconada with permission of IDG (http://www.idg.es/iworld)

 


FIGURE IV.3. Gamma correction effect produced by the different devices. Taken from Ch.Poynton, 1998 at http://www.inforamp.net/~poynton/notes/colour_and_gamma/GammaFAQ.html

IMPORTANT NOTE

  • If we have to perform image processing on an image captured through a video camera, it is necessary to remove the non-linear gamma correction in order to convert the image data into its linear-light representation.
  • On the contrary, if the computation involves human perception (i.e. visual diagnosis: x-ray, pathology diagnosis, etc.), a non-linear representation is required. That is the reason why images containing 8 bits per pixel can sometimes be reproduced adequately on video displays, whereas for linear-light intensity images 12-14 bits may be required to achieve high-quality image reproduction.

    Specifically, if an image originates in linear-light form, gamma correction needs to be applied exactly once. Some of the problems that may appear if this premise is not taken into account are:

a) If we do not apply Gamma correction and the image data is applied to a CRT (display), then the midtones will be too dark.
b) If Gamma correction is applied twice, the midtones will be too light (see the numerical sketch below).
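
    A quick numerical sketch of these two failure modes, using the transfer function of section IV.5 and a CRT exponent of 2.5 (both the 0.5 midtone value and the CRT exponent are illustrative assumptions):

def encode(r):
    """The 0.45-power transfer function from section IV.5."""
    return 4.5 * r if r < 0.018 else 1.099 * r ** 0.45 - 0.099

def crt(signal, gamma=2.5):
    """Luminance reproduced by a CRT display for a given video signal."""
    return signal ** gamma

mid = 0.5                               # a linear-light midtone
correct = crt(encode(mid))              # gamma applied once: about 0.42, close to the original
too_dark = crt(mid)                     # case a), no gamma correction: about 0.18
too_light = crt(encode(encode(mid)))    # case b), gamma applied twice: about 0.65
print(correct, too_dark, too_light)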

    Furthermore, the JPEG and MPEG standards make no mention of the transfer function, but non-linear (video-like) coding is implicit, which makes those images unsuitable for linear-light data manipulation. Standardisation of the transfer function is necessary in order for image formats to meet users' expectations.

    IV.5.2. DETECTOR RESPONSE

    All the previous statements also assume a linear optical response of the detector, which has to be tested in every case under appropriate conditions. As shown in the previous figure, the video voltage can be considered linear in a video camera, but linear intensity is only obtained if there is no gamma correction, because gamma correction is a non-linear transformation. Linearity of the optical system is a sine qua non condition for densitometric measurements.


FIGURE IV.4. Grey values given by the Texcan system [1] using a B/W CCD camera and a B/W frame grabber, measured against a densitometric test slide (optical density), together with the internal optical density values of the system obtained by software [3] (Texcan).

    IV.5.2.1. LIGHT SPECTRUM SENSITIVITY

    The problem is even more complex if we consider that the detectors present in video cameras do not, up to now, have the same sensitivity throughout the whole light spectrum; they therefore modify the colour detection of the eye, which also has limited sensitivity, as shown in figure IV.5.

    The microscopic spectrum of blue (400-510 nm), red (590-660 nm) and green (515-560 nm) is covered in most pathology images, which are mainly based on H-E staining (Haematoxylin-Eosin = B-R). Therefore, any modification of the white balance of the camera or display (the point of equal energy for the 3 primary colours) modifies the colour response.


Figure IV.5. Sensitivity of the detectors.

    This, together with the fact that CCD cameras as well as analogue cameras (except vidicon) are less sensitive than the eye in the blue region, produces colour aberrations when compared with eye perception.

    IMPORTANT: please note that each vendor's camera has a different sensitivity spectrum, and only a few vendors provide the response curve of their sensors.

    IV.5.2.2 DENSITOMETRY ASPECTS

    In transmission densitometry the optical density (OD) is defined as:

OD = -log T = -log (If / Io)

where T = transmittance, If = final (transmitted) light intensity and Io = original light intensity of the background.
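
    A minimal Python sketch of this calculation (the 8-bit grey values used in the example are illustrative):

import math

def optical_density(transmitted, background):
    """Optical density OD = -log10(If / Io) from transmitted-light intensities."""
    return -math.log10(transmitted / background)

# Example: a field transmitting 10% of the background light has OD = 1.
print(optical_density(transmitted=25.5, background=255.0))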

    Obviously each colour has its maximum absorbance in a specific range of the light spectrum; therefore, for densitometric analysis the maximum absorbance of the pigments should be known in advance. That is the case for pathology slides stained with different dyes for histological recognition or quantitation, whose maximum absorbance wavelengths (measured with a narrow-band filter of 20 nm bandwidth under stabilised illumination provided by a 100 W halogen lamp) are summarised in the following table [2].

Table IV.2. Absorbance peaks of the most common histological dyes

STAINING (DYE)                   Maximum absorbance
Toluidine Blue                   640 nm
Feulgen                          560 nm
DAB (Diaminobenzidine)           547 nm
Gallocyanine                     580 nm
Haematoxylin alone               600 nm
Haematoxylin-Eosin               530 nm
Haematoxylin-Light Green         635 nm
PAP standard                     545 nm
Thionine                         570 nm
Methyl Green                     660 nm

    Under these premises, densitometric measurements of transmitted light (transmission densitometry) work in the ACHROMATIC space, in which only grey values are considered.

    Nevertheless the Surround Effect, i.e. the light present around the object, plays an important role. For example, a white area around a dark, dense object may favour the visual contrast, but the bright light coming from the surroundings decreases its densitometric measurement (glare effect at the borders). In the video environment this is controlled, for visual perception, by applying to the power function (in general 0.45) an additional exponent of about 1/1.1 or 1/1.2 to correct for the bright surround. Recently some image formats (e.g. TIFF 6.0) have incorporated a tag that specifies an appropriate transfer function for the viewing environment.

    The contrast ratio, or relation between the brightest white and the darkest black of a particular detector, is very important for densitometric devices. It differs from that of displayed images affected by environmental light (projected cinema 80:1 in a dark theatre, TV 30:1 in the living room, CRT 5:1 in an office environment). In black-and-white (B/W) capture systems it is manipulated by the offset and gain controls, which adjust the dynamic range of the capture system to an artificial LUT that fixes the detector offset (acquiring a black image) at the 0 value and the detector gain (acquiring a white image) at the maximum level (i.e. 255 in 8-bit LUTs). In true-colour frame grabbers, the pixel value of each colour in the frame buffer is mapped through one of three (RGB) 8-bit look-up tables (0-255); each mapped value, plus or minus the black-level (offset) error, is proportional to voltage:

L = (V + epsilon)^gamma        where L = luminance, V = voltage and epsilon = black-level offset,

    while the gain is indirectly corrected by performing a white balance of the colour in the video camera.
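
    A minimal Python sketch of this relationship, assuming an 8-bit frame grabber and a CRT-like gamma of 2.5 (both values are illustrative and not specified in the text):

def luminance_from_code(code, epsilon=0.0, gamma=2.5, max_code=255):
    """Luminance L = (V + epsilon)^gamma, where the digital code is scaled to a
    normalised voltage V in [0, 1] and epsilon is the black-level offset error."""
    v = code / max_code
    return (v + epsilon) ** gamma

# Example: a small residual black-level offset means that the darkest code no longer
# maps to zero luminance, which limits densitometric accuracy near black.
print(luminance_from_code(0, epsilon=0.02))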


FIGURE IV.6. Plot of the densitometric measurements (Y), expressed as ISC (Immuno Score = sum of densitometric values above a background level), versus real steroid receptors in breast tissue (X), in fmol/mg of protein, obtained by biochemical measurements.

    Furthermore, in densitometric measurements the saturation level of the detector is linked to the impossibility of distinguishing variations at the black or offset level (see also Chapter 1 - x-ray laser scanners). This is particularly important if we want to quantify in real units and not in arbitrary units. As an example, figure IV.6 shows how a CCD detector can be saturated by the darkness of a high substrate content, giving a logarithmic response with a plateau above a given content.

References

  1. Ferrer-Roca O., Martin-Rodriguez J.A. Informatics in Pathology. Software developed by the Texcan Group. Advances in Analytical Cell Pathology. Amsterdam, 1990.
  2. Ferrer-Roca O. Analisis de Imagen (II). Aplicaciones. Universidad de La Laguna, 1990.
  3. Ferrer-Roca O., Ramos A., Diaz-Cardama A. Immunohistochemical correlation of steroid receptors in 206 breast cancers. Validation of telequantification based on global scene segmentation. Anal. Cell Pathol. 9:151-163, 1995.
  4. Ch. Poynton, 1998: http://www.inforamp.net/~poynton/notes/colour_and_gamma/GammaFAQ.html.


