Illumination Estimation

In this chapter we discuss how to estimate the environment map of a self-shadowed object given its surface normals, albedo, and radiance. We set up a system of linear equations and solve it for the environment map. In particular, we describe how to include self-shadowing in the estimation process. As a result, the estimated illumination respects self-shadowing instead of baking self-shadowing effects into the environment map.

In the next chapter, we will integrate the solver into the analysis-by-synthesis reconstruction of 3D face shape, texture, and illumination.

Figure 1: Reconstructed environment map respecting self-shadowing.


Mathematical Background

Estimating an environment map in a spherical harmonics representation turns out to be rather straightforward. In fact, it can be derived from the SH-based rendering equation:

\(L_i = b_{n_i}^T l\)

Here, \(L_i\) is the radiance at a surface point \(i\) and \(n_i\) is its surface normal. The vector \(b_{n_i}\) represents the (Lambertian) BRDF in the normal direction \(n_i\), and its scalar product with the incident illumination \(l\) models the surface-light interaction.
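To make the BRDF term concrete, here is a small self-contained sketch (pure Scala, independent of scalismo; the object name and layout are illustrative, not the library's API) of the 9-component Lambertian vector \(b_n\): the first three bands of the real SH basis evaluated at the normal, scaled by the clamped-cosine convolution coefficients \(\pi\), \(2\pi/3\), and \(\pi/4\).

```scala
// Illustrative sketch, not the library's implementation.
object LambertBrdf {
  // First three bands of the real spherical harmonics basis at unit normal (x, y, z).
  def shBasis(x: Double, y: Double, z: Double): Array[Double] = Array(
    0.282095,                                 // Y_00
    0.488603 * y, 0.488603 * z, 0.488603 * x, // band 1
    1.092548 * x * y, 1.092548 * y * z,       // band 2
    0.315392 * (3 * z * z - 1),
    1.092548 * x * z,
    0.546274 * (x * x - y * y)
  )

  // Clamped-cosine (Lambertian) convolution coefficients per band.
  private val bandScale = Array(math.Pi, 2 * math.Pi / 3, math.Pi / 4)
  private val band = Array(0, 1, 1, 1, 2, 2, 2, 2, 2)

  // The BRDF vector b_n appearing in the rendering equation L_i = b_{n_i}^T l.
  def b(x: Double, y: Double, z: Double): Array[Double] =
    shBasis(x, y, z).zip(band).map { case (v, l) => bandScale(l) * v }
}
```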

Given a target image of an object, e.g. a face, and an estimated shape for the object, we have a multitude of well-defined correspondences between radiance values \(L_i\) and surface normals \(n_i\). With \(L = [L_0, L_1, L_2, \dots]^T\) and \(A = [b_{n_0}, b_{n_1}, b_{n_2}, \dots]^T\) and the rendering equation above, we have

\(L = A l\)

for which we can obtain the environmental illumination \(l\) as a least-squares solution.
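As a toy illustration of this solve (pure Scala with made-up data; the names are hypothetical and this is not the library's implementation), the least-squares solution can be obtained from the normal equations \(A^T A\, l = A^T L\):

```scala
// Toy least-squares solve of L = A l via the normal equations A^T A l = A^T L.
// In the real setting, each row of A is an SH BRDF vector b_{n_i}.
object IlluminationLsq {
  // Solve a small square system M x = y by Gaussian elimination with partial pivoting.
  def solve(m0: Array[Array[Double]], y0: Array[Double]): Array[Double] = {
    val n = y0.length
    val m = m0.map(_.clone); val y = y0.clone
    for (col <- 0 until n) {
      val p = (col until n).maxBy(r => math.abs(m(r)(col)))
      val tmpR = m(col); m(col) = m(p); m(p) = tmpR
      val tmpY = y(col); y(col) = y(p); y(p) = tmpY
      for (r <- col + 1 until n) {
        val f = m(r)(col) / m(col)(col)
        for (c <- col until n) m(r)(c) -= f * m(col)(c)
        y(r) -= f * y(col)
      }
    }
    val x = new Array[Double](n)
    for (r <- n - 1 to 0 by -1) {
      var s = y(r)
      for (c <- r + 1 until n) s -= m(r)(c) * x(c)
      x(r) = s / m(r)(r)
    }
    x
  }

  // Least squares: given rows b_i (BRDF vectors) and radiances L_i, recover l.
  def estimate(a: Array[Array[Double]], radiance: Array[Double]): Array[Double] = {
    val n = a(0).length
    val ata = Array.tabulate(n, n)((i, j) => a.map(row => row(i) * row(j)).sum)
    val atl = Array.tabulate(n)(i => a.zip(radiance).map { case (row, li) => row(i) * li }.sum)
    solve(ata, atl)
  }
}
```

With noise-free synthetic data, the recovered \(l\) matches the illumination used to generate the radiances.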

This approach can be extended to account for self-shadowing simply by multiplying the BRDF term with the transfer matrix of the corresponding point. The matrix \(A\) changes to \(A = [T_0 b_{n_0}, T_1 b_{n_1}, T_2 b_{n_2}, \dots]^T\). Again, we solve for \(l\) in the least-squares sense.
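A minimal sketch of how a single row of \(A\) changes, assuming the per-point transfer matrix \(T_i\) has already been precomputed elsewhere (e.g. from ray-traced visibility); the name is hypothetical:

```scala
// Sketch: a shadow-aware row of A is the matrix-vector product T_i * b_{n_i}.
object ShadowedRowDemo {
  def shadowedRow(t: Array[Array[Double]], b: Array[Double]): Array[Double] =
    t.map(row => row.zip(b).map { case (tv, bv) => tv * bv }.sum)
}
```

An unshadowed point has the identity as its transfer matrix, which leaves \(b_{n_i}\) unchanged and recovers the local illumination model.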

Implementation Example

In this example, we will examine the difference between illumination estimation with and without taking self-occlusion into account. We will render a test image with a known environment map and proceed to reconstruct it with both techniques.

As usual, we begin by initializing scalismo, loading the morphable model and setting up an appropriate PRT technique. These are combined to instantiate a PRT-based MoMo renderer.

scalismo.initialize() // Don't forget to initialize scalismo somewhere
val momo = readMoMo("data/supplied/model2017-1_bfm_nomouth_l6.h5")


implicit val rnd: Random = Random(1024L)

val technique = LambertTechnique
  .withDecorator(PersistentCaching(_)).withCachingPath("tmp/prt-cache/illumination")
  .withDecorator(ProfiledTechnique(_))

val parameterModel = LambertParameterModel(technique)
  .withParameterModifier(_
    .withOcclusionRaycastSamples(900)
    .withLightBounces(0)
    .withShBands(3)
  )

val momoRenderer =
  RenderingModel(technique)
    .withParameterModel(parameterModel)
    .withTransferModelTemplate(ExactTransferModel.apply)
    .getParametricMoMoRenderer(momo).withClearColor(RGBA.WhiteTransparent)

Next, we load our environment map. For the sake of the example we use an environment map which mimics three bright, directional, point-like lights.

val trueIllumination = readEnvironmentMap("data/env03.jpg")
  .mapRGB(_ (0 until 9))
  .mapRGB(SHRotation(0, 120, 0).apply)
  .mapRGB(_ * 20.0) // env03.jpg is rather dark, so we brighten it

We continue by rendering our target image.

import java.net.URI
val momoInstance = MoMoInstance.sample(momo, new URI("")) // empty model URI instead of null

val renderParameter = RenderParameter.defaultSquare
  .withMoMo(momoInstance)
  .withEnvironmentMap(trueIllumination)
  .withPose(Pose.away1m
    .withYaw(math.toRadians(-15))
    .withPitch(math.toRadians(-25)))

val targetImage = momoRenderer.renderImage(renderParameter)

Now we try to reconstruct the original environment map from the rendering by solving the two linear systems: once with the formulation that respects self-shadowing, and once with the one that assumes a purely local illumination model.

import scalismo.faces.deluminate.SphericalHarmonicsOptimizer
import faces.paper.fit.deluminate.SelfOcclusionSphericalHarmonicsOptimizer

val surfaceSampling = MeshSurfaceSampling.sampleUniformlyOnSurface(10000) _

val shOptimizerWithoutShadows = new SphericalHarmonicsOptimizer(momoRenderer, targetImage)
val shOptimizerWithShadows = new SelfOcclusionSphericalHarmonicsOptimizer(momoRenderer, targetImage)

val estimatedIlluminationWithShadows = shOptimizerWithShadows.optimize(renderParameter, surfaceSampling).coefficients
val estimatedIlluminationWithoutShadows = shOptimizerWithoutShadows.optimize(renderParameter, surfaceSampling).coefficients

Finally, we compare the reconstructed environment maps. The keys 1, 2, and 3 cycle through the true illumination, the illumination estimated without shadows, and the illumination estimated with shadows, respectively.

val panel = new InteractiveRenderPanel() {
  import KeyEvent._
  import faces.render.prt.render.utils.CubeProjectionRenderer

  override def render(renderParameters: RenderParameter): PixelImage[RGBA] = {
    val illumination =
      input.getToggleGroup(VK_1, VK_2, VK_3) match {
        case Some(VK_1) | None => trueIllumination: SphericalHarmonicsLight
        case Some(VK_2) => SphericalHarmonicsLight(estimatedIlluminationWithoutShadows)
        case Some(VK_3) => SphericalHarmonicsLight(estimatedIlluminationWithShadows)
      }

    val params = renderParameters
      .withEnvironmentMap(illumination)
      .withMoMo(momoInstance)

    val buffer = momoRenderer.renderImage(params).toBuffer

    CubeProjectionRenderer
      .inLowerRight(buffer)
      .render(illumination: RGBTuple[DenseVector[Double]])

    buffer.toImage
  }
}
val figC4 = new InteractiveFigure(title="Demo", interactiveRenderPanel = panel)

Tutorial Home | Next Chapter: Face Reconstruction with Self-Shadowing | Previous Chapter: Removing Self-Shadowing from the Texture Model