In the last chapter, we extended the Morphable Model with efficient self-shadowing by introducing the transfer model, built on top of the existing Morphable Model shape and texture models. As a consequence, some self-shadowing effects are present twice: once from the transfer model and once from the Morphable Model textures themselves, which naturally contain self-shadowing even when the scanned faces were well lit. In this chapter, we remove the self-shadowing already present in the texture model.
Because Morphable Models represent shape and texture independently, shadows due to self-occlusion can be expressed by both the texture model and the shape model; either one can explain self-shadowing. We want self-shadowing to be explained solely by the shape model, so we remove the self-shadowing effects present in the texture model. This reduces the texture model's explanatory power for shadows. We can therefore expect shading cues to be explained by shape and illumination rather than by texture, and we would expect more precise reconstructions.
Figure 1: Original texture, the texture with the effect of self-shadowing removed, and the removed part.
Having an efficient method to approximate transfer data at hand, we can now leverage PRT-based shadowing to delight the texture model. In this chapter we demonstrate how self-shadowing effects can be removed from textures, which can in turn be used to generate an occlusion-adjusted texture model. Note that the transfer model per se is not required to remove shadowing from textures; this process would typically be performed in an offline pre-computation using the exact transfer simulation.
In this chapter we illustrate the delighting process by applying it to the texture of a mesh randomly sampled from the model. After delighting a series of face scans in the same way, it is possible to build a Morphable Face Model that does not contain self-shadowing effects in its texture model. The delighted Basel Face Model can be obtained from the Basel Face Model website.
The concept behind this is rather simple: given a mesh, e.g. a scan, and its shadow-tainted texture \(I_s\), we would like to compute the occlusion-adjusted texture \(I_{\bar{s}}\). To do so, we compare two renderings of the mesh under ambient illumination: one with PRT-simulated shadows \(R_s\) and one without simulated shadows \(R_{\bar{s}}\). By subtracting \(R_{\bar{s}}\) from \(R_s\) we obtain the difference in radiance due to self-shadowing. We then subtract this difference from the original texture to nullify the effects of self-shadowing on it. Essentially we have:
\(I_{\bar{s}} = I_s - (R_s - R_{\bar{s}})\)
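To make the formula concrete, here is a minimal per-channel sketch in plain Scala, independent of the framework; the numeric values are made up purely for illustration. Since shadows darken the render, \(R_s \le R_{\bar{s}}\), so the correction brightens shadowed regions of the texture:

```scala
object DelightingFormula {
  /** Per-channel delighting: remove the radiance difference due to self-shadowing. */
  def delight(texture: Double, shadowed: Double, unshadowed: Double): Double =
    texture - (shadowed - unshadowed)

  def main(args: Array[String]): Unit = {
    // Assumed linear-RGB values for one surface point:
    // ambient render with shadows 0.6, without shadows 0.8, texture value 0.5.
    val corrected = delight(texture = 0.5, shadowed = 0.6, unshadowed = 0.8)
    println(corrected) // approximately 0.7: brighter, since the point was in shadow

    // Where no shadow falls, both renders agree and the texture is unchanged.
    println(delight(texture = 0.5, shadowed = 0.8, unshadowed = 0.8))
  }
}
```

Note that the subtraction only makes sense in a linear color space, which is why the next section converts to linear RGB first.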
We usually work with textures and renderings in sRGB space. Because sRGB is a non-linear color space, adding or subtracting color values directly would not give correct results. To correct the textures, we first transform the color values to the linear RGB color space, apply the subtractions, and then transform the result back to sRGB.
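For reference, the per-channel sRGB transfer function and its inverse (as specified in IEC 61966-2-1) can be sketched as follows. The helpers `toLinear` and `toSRGB` here are plain standalone functions, not the framework's `RGBsRGBConversion` used below:

```scala
object SRGBConversion {
  /** sRGB -> linear RGB, one channel in [0, 1] (IEC 61966-2-1). */
  def toLinear(c: Double): Double =
    if (c <= 0.04045) c / 12.92
    else math.pow((c + 0.055) / 1.055, 2.4)

  /** Linear RGB -> sRGB, the inverse of toLinear. */
  def toSRGB(c: Double): Double =
    if (c <= 0.0031308) c * 12.92
    else 1.055 * math.pow(c, 1.0 / 2.4) - 0.055

  def main(args: Array[String]): Unit = {
    // Subtracting directly in sRGB would be wrong; convert first, operate, convert back.
    val srgbValue = 0.5
    val linear = toLinear(srgbValue)
    println(toSRGB(linear)) // round-trips back to (approximately) 0.5
  }
}
```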
We start by initializing scalismo and drawing a random sample from our face model. Later, we will remove self-shadowing effects from the texture of the drawn face.
scalismo.initialize() // Don't forget to initialize scalismo at the beginning of the application.
val momo = readMoMo("data/supplied/model2017-1_bfm_nomouth_l6.h5")
implicit val rnd: Random = scalismo.utils.Random(1024L)
val mesh = momo.sample()
As noted in the previous section, textures are typically encoded in the sRGB color-space. Because we want to perform additive operations on them, we transform the sampled texture to a linear RGB color-space next.
import RGBsRGBConversion._
val shape = mesh.shape
val colorSRGB = mesh.color
val colorLinear = mesh.color.mapPoints(RGBsRGBConversion.toLinear)
val meshLinear = VertexColorMesh3D(shape, colorLinear)
In order to compute shadowing effects, we choose an appropriate PRT technique and specify some related rendering parameters.
val technique = LambertTechnique
.withDecorator(PersistentCaching(_)).withCachingPath("tmp/prt-cache/delighting")
.withDecorator(ProfiledTechnique(_))
val prtParams = technique.getDefaultParameters(shape)
.withColoredBrdf(colorLinear) // Note the usage of linear color space
.withLightBounces(0)
.withShBands(5)
We continue by setting up two Lambert shaders for ambient lighting, one of which accounts for self-shadowing using PRT while the other uses a local-only illumination model.
val ambientIllumination =
RenderParameter.default
.withEnvironmentMap(SphericalHarmonicsLight.ambientWhite)
val ambientStdUnshadowed = ambientIllumination.pixelShader(meshLinear) // standard lambert
val ambientPrtShadowed = technique.getRenderer(prtParams).withClearColor(RGBA.WhiteTransparent) // PRT-shadowed lambert, with white transparent background.
.createPixelShader(ambientIllumination, PrtRenderParameter.default)
Then, for each of the two shaders, we render the radiance into an individual surface point property. Note that we are not rendering a regular image; instead, we conceptually render radiance onto each point of the mesh's surface.
import faces.paper.model.IlluminationCorrection.renderIntoPointProperty
val inPropertyWithShadows = renderIntoPointProperty(shape.triangulation, ambientPrtShadowed)
val inPropertyWithoutShadows = renderIntoPointProperty(shape.triangulation, ambientStdUnshadowed)
Now we apply the subtraction formula introduced in the previous section to the surface radiance properties that we have just rendered and the original texture of the mesh.
def diff(a: RGBA, b: RGBA) = (a - b).noAlpha
val shadowDifference = inPropertyWithShadows.combineWith(inPropertyWithoutShadows)(diff) // combineWith applies the diff function to corresponding values of both properties.
val shadowCorrectedProperty = colorLinear.combineWith(shadowDifference)(diff).mapPoints(toSRGB)
val meshCorrected = VertexColorMesh3D(meshLinear.shape, shadowCorrectedProperty)
Finally, we can visualize the resulting shadow-adjusted texture and compare it to the original. By pressing the keys 1, 2, and 3 you can switch between the difference image, the shadow-corrected mesh, and the original.
val panel = new InteractiveRenderPanel {
import KeyEvent._
import ParametricRenderer._
override def render(renderParameter: RenderParameter): PixelImage[RGBA] =
input.getToggleGroup(VK_1, VK_2, VK_3) match {
case Some(VK_1) | None => renderPropertyImage(renderParameter, shape, shadowDifference.map(_ * -1.0))
.map(_.getOrElse(RGBA.White)) // to exaggerate the difference, multiply it by -2.0 instead of -1.0.
case Some(VK_2) => renderParameterVertexColorMesh(renderParameter, meshCorrected, RGBA.WhiteTransparent)
case Some(VK_3) => renderParameterVertexColorMesh(renderParameter, mesh, RGBA.WhiteTransparent)
}
}
val figC3 = new InteractiveFigure(title = "Demo", interactiveRenderPanel = panel)