EMoR: Explained for MoRons

A simple explainer of the EMoR model and its use in image processing.

1. Overview

A general problem in image processing is for different parts of the system to agree on just what the image data means. There are basically three options:

  • Output-referenced: The pixel values are what you'll give to the graphics card to display. This is what is stored in your average JPG file.

  • Sensor-referenced: The pixel values correspond to the actual amount of light that hit the image sensor during the exposure.

  • Scene-referenced: The pixel values correspond to the absolute light intensity.

If you have two photos of the same scene, taken at two different exposures, the output-referenced pixel values will differ and the photos will look different when you display them. The sensor-referenced data will also differ because of the difference in exposure. The corresponding pixels will, however, have the same scene-referenced values, since they correspond to the same light intensity.

In order to work on images captured with different cameras or with different exposures the image values must be brought to a common reference, and for simplicity we can choose the sensor-referenced option. The Computer Vision Laboratory of Columbia University has such a method that is described in [EMoR]. This article is essentially an explainer of that paper for those who "just want the code".

It's important to note what EMoR doesn't do: it doesn't correct for any change in brightness caused by the optical system in front of the sensor. Vignetting, for example, is not handled by the model. It is, however, easier to model vignette correction using sensor-referenced values.

2. Goal

At the end of this, you'll have two functions - emor and invEmor - that transform from sensor-referenced data to output-referenced data, and from output-referenced data to sensor-referenced data:

double sensorValue = invEmor(jpgValue);
// Do something with the sensor value,
// for example, halve the brightness
sensorValue = sensorValue / 2.0;
double newJpgValue = emor(sensorValue);

3. Model

The "model" consists of a set of curves - one base curve and 25 corrections to that base curve[1] - where the first correction is the biggest and each subsequent correction is more and more subtle. This means you can choose how well you want to model the sensor response: if all you want is a very rough transformation, use the first three corrections; if you want better, use five.

The sensor is described using these corrections - specifically, how much of each correction is needed. If you have a sensor with five correction values, for example -3.3, 2.9, 0.72, -0.31, and -0.45, then your final curve is the base curve minus 3.3 times the first correction, plus 2.9 times the second correction, and so on. This will become clearer once we get coding.
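As a tiny numeric sketch of that weighting (the helper name responseEntry and all the values below are my own, made up for illustration - the real entries come from the model data):

```cpp
#include <cmath>
#include <cstddef>

// Hypothetical helper (not from the paper): combine one base
// curve entry with the weighted correction entries for the
// same index.
double responseEntry(double f0_i, const double* h_i,
                     const double* params, std::size_t count) {
    double v = f0_i;
    for (std::size_t j = 0; j < count; ++j) {
        v += params[j] * h_i[j];
    }
    return v;
}
```

With weights -3.3 and 2.9, the entry is f0[i] - 3.3*h[0][i] + 2.9*h[1][i], exactly the "base curve plus weighted corrections" rule described above.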

Start by downloading the model data (emor.txt)[a]. The data contains a total of twenty-seven lists of 1024 values, named E, f0, and h(1) to h(25). You want to turn these into a pair of double[1024] arrays (the E and f0 values), and a double[25][1024] array for the h(x) values.

double E[1024] = {
    0.000000e+000, 9.775171e-004, 1.955034e-003,
    ...
};

double f0[1024] = {
    ...
};

double h[25][1024] = {
    { ...h(1) data here... },
    { ...h(2) data here... },
    ...
    { ...h(25) data here... }
};

We can now use this to create a 1024-entry lookup table that tells us the output-referenced value given the sensor-referenced value:

std::vector<double> parameters = ...;
std::vector<double> lookup;
for (int i = 0; i < 1024; ++i) {
    double v = f0[i];
    for (std::size_t j = 0; j < parameters.size(); ++j) {
        v += parameters[j] * h[j][i];
    }
    lookup.push_back(v);
}

The above computes the response curve to the precision given by the number of entries in the parameters vector. Because the curve is an approximation, it has two problems that keep us from using it as-is:

  1. It's not necessarily monotonic. In places, the output value (the sensor's response) can go down as the input value goes up (more light hits the sensor).

  2. It's not normalized. There are output values less than zero and greater than one.

4. Making it Monotonic

To fix these issues, we simply clamp the curve to the interval 0.0 - 1.0, and force it to be monotonic by clamping so that each value is less than or equal to the next:

if (lookup[1023] > 1.0) {
    lookup[1023] = 1.0;
}
if (lookup[1023] < 0.0) {
    lookup[1023] = 0.0;
}
for (int i = 1022; i >= 0; --i) {
    if (lookup[i] > lookup[i + 1]) {
        lookup[i] = lookup[i + 1];
    }
    if (lookup[i] < 0.0) {
        lookup[i] = 0.0;
    }
}

We can now use the lookup vector to translate sensor-referenced values in the range 0 to 1023, and get the output value from the sensor as a double in the range 0.0 - 1.0. We can then map the output range to, for example, 0 - 255 for 8-bit output.

But what's the input range? The answer is that it can be anything, as we still haven't fixed the input range to be scene-referenced. But mostly this doesn't matter.
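A sketch of what the emor function from section 2 might look like once the clamped table exists (the names and the extra lookup parameter are my own, added so the sketch is self-contained - in real code the table would likely be stored alongside the function):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical wrapper: look up the output-referenced value
// (0.0 - 1.0) for a sensor-referenced index (0 - 1023).
double emor(const std::vector<double>& lookup, int sensorIndex) {
    // Clamp the index to the table's valid 0..1023 range.
    int i = std::max(0, std::min(1023, sensorIndex));
    return lookup[i];
}
```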

5. Inverting

Typically we start with an output-referenced image and want to make it sensor-referenced in order to perform some correction on it. This is done by inverting the lookup table we created in the previous step. There is an inverse response model[b] available, but I haven't been able to make it work, so we'll use a naive inversion instead:

std::vector<double> newLookup;
for (int i = 0; i < 1024; ++i) {
    // We want to find the input for which
    // the output is this:
    double y = i / 1023.0;
    // Find the index of the smallest lookup
    // entry that is greater than or equal to
    // the value we seek (y). Our corresponding
    // x will be the 0-1023 input range remapped
    // to 0.0 - 1.0.
    double x = 1.0;
    for (int j = 0; j < 1024; ++j) {
        if (lookup[j] >= y) {
            x = j / 1023.0;
            break;
        }
    }
    newLookup.push_back(x);
}
lookup = newLookup;

6. Usage

Using the lookup tables requires you to remap the input to the interval 0 - 1023, and then remap the output to whatever you need. For example, using 16-bit values for computation, you'd do:

// JPG is 8-bit, so we map 0..255 to 0..1020
double sensorValue = invEmor(jpgValue * 4); 
// Do something with the sensor value,
// for example, halve the brightness
sensorValue = sensorValue / 2.0;
// sensorValue is 0..1 which we map to 0..1023,
// and the output of emor (0..1) is mapped back to 0..255
int newJpgValue = (int) (emor(sensorValue * 1023) * 255);
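Putting sections 5 and 6 together, the whole round trip can be sketched like this. A made-up gamma-like curve stands in for the real EMoR table, and invertTable and roundTrip are my own names, not from the paper:

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

// Naive inversion of a monotonic 1024-entry table, as in
// section 5: for each target output y, find the first input
// whose table entry reaches y.
std::vector<double> invertTable(const std::vector<double>& lookup) {
    std::vector<double> inverse(1024);
    for (int i = 0; i < 1024; ++i) {
        double y = i / 1023.0;
        double x = 1.0;
        for (int j = 0; j < 1024; ++j) {
            if (lookup[j] >= y) {
                x = j / 1023.0;
                break;
            }
        }
        inverse[i] = x;
    }
    return inverse;
}

// Round-trip an 8-bit value through the inverse and forward
// tables, with the remapping from section 6.
int roundTrip(const std::vector<double>& lookup,
              const std::vector<double>& inverse, int jpgValue) {
    double sensorValue = inverse[jpgValue * 4];        // 0..255 -> 0..1020
    return (int)(lookup[(int)(sensorValue * 1023)] * 255);
}
```

Because both tables are quantized to 1024 entries, the round trip is lossy: expect the result to land within a step or so of the original value rather than matching it exactly.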

7. Obtaining Coefficients

The easiest way to obtain coefficients for a sensor is to take some photos and use Hugin's[c] Photometric Optimization - Camera Response[d]. The values for h(1) to h(5) can be found as Ra to Re.

8. Endnotes

You can download the dataset from the DoRF homepage[e].