Occlusion-aware model adaptation

You have now seen all the components of our semantic Morphable Model. We now put them together and create a script that performs the full inference for a novel target image. As a starting point, we use the fit script that implements the full model adaptation as proposed in Schönborn 2017. The parts of the script are commented, and we will explain all the changes we make for our semantic approach. We implement the simplest instance of a semantic Morphable Model, using only a face and a non-face model. This leads to occlusion-aware face model adaptation.

First of all, we need to import a number of classes from the scalismo-faces framework and initialize scalismo:


import java.io.File

import scalismo.faces.color.{RGB, RGBA}
import scalismo.faces.deluminate.SphericalHarmonicsOptimizer
import scalismo.faces.image.{AccessMode, LabeledPixelImage, PixelImage, PixelImageOperations}
import scalismo.faces.io.{PixelImageIO, TLMSLandmarksIO}
import scalismo.faces.parameters.{ParametricRenderer, RenderParameter, SphericalHarmonicsLight}
import scalismo.faces.sampling.face.evaluators.PixelEvaluators._
import scalismo.faces.sampling.face.evaluators.PointEvaluators.IsotropicGaussianPointEvaluator
import scalismo.faces.sampling.face.evaluators.PriorEvaluators.{GaussianShapePrior, GaussianTexturePrior}
import scalismo.faces.sampling.face.evaluators._
import scalismo.faces.sampling.face.proposals.ImageCenteredProposal.implicits._
import scalismo.faces.sampling.face.proposals.ParameterProposals.implicits._
import scalismo.faces.sampling.face.proposals.SphericalHarmonicsLightProposals._
import scalismo.faces.sampling.face.proposals._
import scalismo.faces.sampling.face.{MoMoRenderer, ParametricLandmarksRenderer, ParametricModel}
import scalismo.geometry.{Vector, Vector3D, _1D, _2D}
import scalismo.sampling.algorithms.MetropolisHastings
import scalismo.sampling.evaluators.ProductEvaluator
import scalismo.sampling.proposals.MixtureProposal.implicits._
import scalismo.sampling.proposals.{MetropolisFilterProposal, MixtureProposal}
import scalismo.sampling._
import scalismo.utils.Random
import scalismo.faces.sampling.face.proposals.SphericalHarmonicsLightProposals.{RobustSHLightSolverProposal, RobustSHLightSolverProposalWithLabel}
import scalismo.faces.segmentation.LoopyBPSegmentation
import scalismo.faces.segmentation.LoopyBPSegmentation.{BinaryLabelDistribution, Label, LabelDistribution}
import scalismo.faces.gui._
import scalismo.faces.gui.GUIBlock._
import scalismo.faces.io.MoMoIO
import scalismo.faces.sampling.face.ParametricImageRenderer
import scalismo.faces.sampling.face.loggers.PrintLogger
import faces.sampling.face.proposals.SegmentationMasterProposal

scalismo.initialize()
val seed = 1986L
implicit val rnd = Random(seed)
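Fixing the random seed makes the whole sampling run reproducible.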

We start by loading the face model, an image we would like to analyze, and its landmarks:


val modelface12 = MoMoIO.read(new File("data/model2017-1_face12_nomouth.h5")).get
val rendererFace12 = MoMoRenderer(modelface12, RGBA.BlackTransparent).cached(5)

val targetFn = "data/fit.png"
val lmFn = "data/fit.tlms"

val target = PixelImageIO.read[RGBA](new File(targetFn)).get
val targetLM = TLMSLandmarksIO.read2D(new File(lmFn)).get.filter(lm => lm.visible)
ImagePanel(target).displayIn("Target Image")
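The cached(5) call adds a small cache of recent rendering results to the renderer; this pays off because the filter proposals and evaluators introduced below repeatedly evaluate the same parameter states.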
  

Then we set up the parts we reuse from the probabilistic fitting framework. We need proposals for the parameter updates and choose the same proposals as in Schönborn 2017 (the suffixes C, I, F, and HF denote mixture components with decreasing step size, from coarse to very fine):

  /* Collection of all pose related proposals */
  def defaultPoseProposal(lmRenderer: ParametricLandmarksRenderer)(implicit rnd: Random):
  ProposalGenerator[RenderParameter] with TransitionProbability[RenderParameter] = {
    import MixtureProposal.implicits._

    val yawProposalC = GaussianRotationProposal(Vector3D.unitY, 0.75f)
    val yawProposalI = GaussianRotationProposal(Vector3D.unitY, 0.10f)
    val yawProposalF = GaussianRotationProposal(Vector3D.unitY, 0.01f)
    val rotationYaw = MixtureProposal(0.1 *: yawProposalC + 0.4 *: yawProposalI + 0.5 *: yawProposalF)

    val pitchProposalC = GaussianRotationProposal(Vector3D.unitX, 0.75f)
    val pitchProposalI = GaussianRotationProposal(Vector3D.unitX, 0.10f)
    val pitchProposalF = GaussianRotationProposal(Vector3D.unitX, 0.01f)
    val rotationPitch = MixtureProposal(0.1 *: pitchProposalC + 0.4 *: pitchProposalI + 0.5 *: pitchProposalF)

    val rollProposalC = GaussianRotationProposal(Vector3D.unitZ, 0.75f)
    val rollProposalI = GaussianRotationProposal(Vector3D.unitZ, 0.10f)
    val rollProposalF = GaussianRotationProposal(Vector3D.unitZ, 0.01f)
    val rotationRoll = MixtureProposal(0.1 *: rollProposalC + 0.4 *: rollProposalI + 0.5 *: rollProposalF)

    val rotationProposal = MixtureProposal(0.5 *: rotationYaw + 0.3 *: rotationPitch + 0.2 *: rotationRoll).toParameterProposal

    val translationC = GaussianTranslationProposal(Vector(300f, 300f)).toParameterProposal
    val translationF = GaussianTranslationProposal(Vector(50f, 50f)).toParameterProposal
    val translationHF = GaussianTranslationProposal(Vector(10f, 10f)).toParameterProposal
    val translationProposal = MixtureProposal(0.2 *: translationC + 0.2 *: translationF + 0.6 *: translationHF)

    val distanceProposalC = GaussianDistanceProposal(500f, compensateScaling = true).toParameterProposal
    val distanceProposalF = GaussianDistanceProposal(50f, compensateScaling = true).toParameterProposal
    val distanceProposalHF = GaussianDistanceProposal(5f, compensateScaling = true).toParameterProposal
    val distanceProposal = MixtureProposal(0.2 *: distanceProposalC + 0.6 *: distanceProposalF + 0.2 *: distanceProposalHF)

    val scalingProposalC = GaussianScalingProposal(0.15f).toParameterProposal
    val scalingProposalF = GaussianScalingProposal(0.05f).toParameterProposal
    val scalingProposalHF = GaussianScalingProposal(0.01f).toParameterProposal
    val scalingProposal = MixtureProposal(0.2 *: scalingProposalC + 0.6 *: scalingProposalF + 0.2 *: scalingProposalHF)

    val poseMovingNoTransProposal = MixtureProposal(rotationProposal + distanceProposal + scalingProposal)
    val centerREyeProposal = poseMovingNoTransProposal.centeredAt("right.eye.corner_outer", lmRenderer).get
    val centerLEyeProposal = poseMovingNoTransProposal.centeredAt("left.eye.corner_outer", lmRenderer).get
    val centerRLipsProposal = poseMovingNoTransProposal.centeredAt("right.lips.corner", lmRenderer).get
    val centerLLipsProposal = poseMovingNoTransProposal.centeredAt("left.lips.corner", lmRenderer).get

    MixtureProposal(centerREyeProposal + centerLEyeProposal + centerRLipsProposal + centerLLipsProposal + 0.2 *: translationProposal)
  }
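The centeredAt wrapper turns a pose proposal into an image-centered one: roughly speaking, it compensates the 2D translation so that the given landmark stays at the same image position while the pose changes.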


 /* Collection of all statistical model (shape, texture) related proposals */
  def neutralMorphableModelProposal(implicit rnd: Random):
  ProposalGenerator[RenderParameter] with TransitionProbability[RenderParameter] = {

    val shapeC = GaussianMoMoShapeProposal(0.2f)
    val shapeF = GaussianMoMoShapeProposal(0.1f)
    val shapeHF = GaussianMoMoShapeProposal(0.025f)
    val shapeScaleProposal = GaussianMoMoShapeCaricatureProposal(0.2f)
    val shapeProposal = MixtureProposal(0.1f *: shapeC + 0.5f *: shapeF + 0.2f *: shapeHF + 0.2f *: shapeScaleProposal).toParameterProposal

    val textureC = GaussianMoMoColorProposal(0.2f)
    val textureF = GaussianMoMoColorProposal(0.1f)
    val textureHF = GaussianMoMoColorProposal(0.025f)
    val textureScale = GaussianMoMoColorCaricatureProposal(0.2f)
    val textureProposal = MixtureProposal(0.1f *: textureC + 0.5f *: textureF + 0.2f *: textureHF + 0.2f *: textureScale).toParameterProposal

    MixtureProposal(shapeProposal + textureProposal)
  }

  /* Collection of all statistical model (shape, texture, expression) related proposals */
  def defaultMorphableModelProposal(implicit rnd: Random):
  ProposalGenerator[RenderParameter] with TransitionProbability[RenderParameter] = {


    val expressionC = GaussianMoMoExpressionProposal(0.2f)
    val expressionF = GaussianMoMoExpressionProposal(0.1f)
    val expressionHF = GaussianMoMoExpressionProposal(0.025f)
    val expressionScaleProposal = GaussianMoMoExpressionCaricatureProposal(0.2f)
    val expressionProposal = MixtureProposal(0.1f *: expressionC + 0.5f *: expressionF + 0.2f *: expressionHF + 0.2f *: expressionScaleProposal).toParameterProposal


    MixtureProposal(neutralMorphableModelProposal + expressionProposal)
  }

  /* Collection of all color transform proposals */
  def defaultColorProposal(implicit rnd: Random):
  ProposalGenerator[RenderParameter] with TransitionProbability[RenderParameter] = {
    val colorC = GaussianColorProposal(RGB(0.01f, 0.01f, 0.01f), 0.01f, RGB(1e-4f, 1e-4f, 1e-4f))
    val colorF = GaussianColorProposal(RGB(0.001f, 0.001f, 0.001f), 0.01f, RGB(1e-4f, 1e-4f, 1e-4f))
    val colorHF = GaussianColorProposal(RGB(0.0005f, 0.0005f, 0.0005f), 0.01f, RGB(1e-4f, 1e-4f, 1e-4f))

    MixtureProposal(0.2f *: colorC + 0.6f *: colorF + 0.2f *: colorHF).toParameterProposal
  }

    /* Collection of all illumination related proposals */
  def illuminationProposal(modelRenderer: ParametricImageRenderer[RGBA] with ParametricModel, target: PixelImage[RGBA])(implicit rnd: Random):
  ProposalGenerator[RenderParameter] with TransitionProbability[RenderParameter] = {

    val lightSHPert = SHLightPerturbationProposal(0.001f, fixIntensity = true)
    val lightSHIntensity = SHLightIntensityProposal(0.1f)

    val lightSHBandMixter = SHLightBandEnergyMixer(0.1f)
    val lightSHSpatial = SHLightSpatialPerturbation(0.05f)
    val lightSHColor = SHLightColorProposal(0.01f)

    MixtureProposal(lightSHSpatial + lightSHBandMixter + lightSHIntensity + lightSHPert + lightSHColor).toParameterProposal
  }
  

The only small change we made to these proposals is in the illuminationProposal: we removed the non-robust illumination estimation.

Let us combine those proposals into our proposal distribution. As discussed in the probabilistic fitting tutorial, we filter the proposals by their likelihood according to the landmarks and to a prior:

  // pose proposal
  val totalPose = defaultPoseProposal(rendererFace12)

  //light proposals
  val lightProposal = illuminationProposal(rendererFace12, target)

  //color proposals
  val colorProposal = defaultColorProposal

  //Morphable Model  proposals
  val expression = true
  val momoProposal = if(expression) defaultMorphableModelProposal else neutralMorphableModelProposal

  // Landmarks Evaluator
  val pointEval = IsotropicGaussianPointEvaluator[_2D](4.0) //lm click uncertainty in pixel! -> should be related to image/face size
  val landmarksEval = LandmarkPointEvaluator(targetLM, pointEval, rendererFace12)

  // Prior Evaluator
  val priorEval = ProductEvaluator(GaussianShapePrior(0, 1), GaussianTexturePrior(0, 1))


  // full proposal filtered by the landmark and prior Evaluator
  val proposal = MetropolisFilterProposal(MetropolisFilterProposal(MixtureProposal(totalPose + colorProposal + 3f *: momoProposal + 2f *: lightProposal), landmarksEval), priorEval)

We now have a proposal distribution for updating the parameters \(\theta\) during model adaptation; the nested MetropolisFilterProposals ensure that each proposed update first has to pass a Metropolis acceptance step with respect to the landmark likelihood and then one with respect to the prior. For our Metropolis-Hastings algorithm we also need an evaluator for the target distribution. So far we stayed very close to the original fit script. However, we now want to use the likelihood we proposed:

\[ \ell (\theta ; \tilde{I}, z ) = \prod_{i} \prod_{k} \ell_{k} ( \theta ; \tilde{I}_i )^{z_{ik}} \]
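With only two models (face and non-face) the label is binary; writing \(z_{i,\text{face}} = z_i\) and \(z_{i,\text{non-face}} = 1 - z_i\), the log-likelihood becomes

\[ \log \ell (\theta ; \tilde{I}, z ) = \sum_{i} \Big[ z_i \log \ell_{\text{face}} ( \theta ; \tilde{I}_i ) + (1 - z_i) \log \ell_{\text{non-face}} ( \theta ; \tilde{I}_i ) \Big], \]

so each pixel contributes through exactly the model its label selects.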

To evaluate a rendered image according to this likelihood, we need to know the label \(z\) for each pixel. This has to be reflected in the evaluator. The evaluator using our likelihoods \(\ell_{\text{face}}\) and \(\ell'_{\text{non-face}}\) as proposed in Chapters 3 and 4 is implemented in the LabeledIndependentPixelEvaluator. We replace the IndependentPixelEvaluator with this one as image evaluator:

val sdev = 0.043f
val faceEval = IsotropicGaussianPixelEvaluator(sdev)
val nonFaceEval = HistogramRGB.fromImageRGBA(target, 25)
val imgEval = LabeledIndependentPixelEvaluator(target, faceEval, nonFaceEval)
val labeledModelEval = LabeledImageRendererEvaluator(rendererFace12, imgEval)

The LabeledIndependentPixelEvaluator is a DistributionEvaluator for a LabeledPixelImage[RGBA], whereas the IndependentPixelEvaluator was designed for a PixelImage[RGBA]. This reflects that the evaluator also needs the \(z\) label: a LabeledPixelImage is simply a PixelImage together with a PixelImage[Int] holding the labels.
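To make this concrete, here is a minimal sketch of how such a labeled likelihood can be computed. This is an illustration under the assumption that label 1 marks face pixels, not the actual LabeledIndependentPixelEvaluator implementation; logFace and logNonFace are hypothetical stand-ins for the two pixel evaluators:

// Sketch only: per-pixel log-likelihoods are selected by the label z and
// summed over the image, mirroring the product likelihood above.
def labeledLogLikelihood(rendered: PixelImage[RGBA],
                         target: PixelImage[RGBA],
                         label: PixelImage[Int],
                         logFace: (RGBA, RGBA) => Double, // log l_face, e.g. isotropic Gaussian
                         logNonFace: RGBA => Double       // log l_non-face, e.g. color histogram
                        ): Double = {
  var sum = 0.0
  for (x <- 0 until target.width; y <- 0 until target.height) {
    sum +=
      (if (label(x, y) == 1) logFace(target(x, y), rendered(x, y))
       else logNonFace(target(x, y)))
  }
  sum
}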

The label \(z\) is now encoded as a PixelImage[Int]. To be able to use it during evaluation, it has to be part of the Markov chain. Whereas the Markov chain previously held only \(\theta\), i.e. our RenderParameter, it now also holds \(z\). To implement this, we have to wrap our proposals to also include the label, and we have to adapt the Markov chain accordingly:


  // a dummy segmentation proposal
  class SegmentationProposal(implicit rnd: Random) extends ProposalGenerator[(RenderParameter, PixelImage[Int])]  with SymmetricTransitionRatio[(RenderParameter, PixelImage[Int])] {
    override def propose(current: (RenderParameter, PixelImage[Int])): (RenderParameter, PixelImage[Int]) = {current}
  }


  // a joint proposal for $\theta$ and $z$ (in this implementation the segmentation proposal is never chosen)
  val masterProposal = SegmentationMasterProposal(proposal, new SegmentationProposal, 1)
  val printLogger = PrintLogger[RenderParameter](Console.out, "").verbose
  val imageFitter = MetropolisHastings(masterProposal, labeledModelEval)

We start our inference by initializing the pose based on the landmarks - again in the same way as you already know from the probabilistic fitting tutorial:


val poseFitter = MetropolisHastings(totalPose, landmarksEval)

//landmark chain for initialisation
val initDefault: RenderParameter = RenderParameter.defaultSquare.fitToImageSize(target.width, target.height)
val init50 = initDefault.withMoMo(initDefault.momo.withNumberOfCoefficients(50, 50, 5))
val initLMSamples: IndexedSeq[RenderParameter] = poseFitter.iterator(init50).take(5000).toIndexedSeq

val lmScores = initLMSamples.map(rps => (landmarksEval.logValue(rps), rps))

val bestLM = lmScores.maxBy(_._1)._2
val imgLM = rendererFace12.renderImage(bestLM)
ImagePanel(imgLM).displayIn("Pose Initialization")

As a next step, we perform our robust illumination estimation to generate an initial estimate of the illumination parameters in \(\theta\) and an initial segmentation label \(z\) (this will take around 30 seconds):

  val shOpt = SphericalHarmonicsOptimizer(rendererFace12, target)
  val robustShOptimizerProposal = RobustSHLightSolverProposalWithLabel(rendererFace12, shOpt, target, iterations = 100)
  val dummyImg = target.map(_ => 0)
  val robust = robustShOptimizerProposal.propose((bestLM, dummyImg))

  val robustImg = rendererFace12.renderImage(robust._1)
  val consensusSet = robust._2.map(RGB(_))

  shelf(
    ImagePanel(robustImg),
    ImagePanel(consensusSet)
  ).displayIn("Robust Illumination Estimation")

Now we have an initialization and can start our EM-like inference of the segmentation label and the model parameters. As a first step, we draw 1000 samples from our image fitter; this may take 2-5 minutes:

  val labeledPrintLogger = PrintLogger[(RenderParameter, PixelImage[Int])](Console.out, "").verbose
  val first1000 = imageFitter.iterator(robust, labeledPrintLogger).take(1000).toIndexedSeq.last
  val first1000Img = rendererFace12.renderImage(first1000._1)
  shelf(
    ImagePanel(first1000Img)
  ).displayIn("After first 1000 samples")

Now we would like to perform a first round of segmentation. First we calculate the likelihood for the non-face model \[\ell_{\text{non-face}} (\theta ; \tilde{I}_i) = h_{\tilde{I}}(\tilde{I}_{i}).\]

Let's build the histogram on the complete image and end up with the pixelwise likelihood for the non-face region:

val nonfaceHist = HistogramRGB.fromImageRGBA(target, 25, 0)
val nonfaceProb: PixelImage[Double] = target.map(p => nonfaceHist.logValue(p.toRGB))

Let's now evaluate the face likelihood: \[ \ell'_{\text{face}} (\theta; \tilde{I}_i) = \begin{cases} \frac{1}{N} \exp \bigg(- \frac{1}{2 \sigma^2} \min_{j \in n(i)} \Big\lVert \tilde{I}_i - I_{j}(\theta) \Big\rVert ^2 \bigg) & \text{if $i \in \mathcal{F}$}\\ \frac{1}{\delta} h_f(\tilde{I}_i, \theta) & \text{if $i \in \mathcal{B}$.} \end{cases} \]

We first define the histogram for the pixels in \(\mathcal{B}\). It is built on the face region only, i.e. on the pixels that lie in \(\mathcal{F}\) and carry the label \(z=1\) (labeled as face). We mask the target according to the current \(z\) label and the current face model parameters \(\theta\), and build the histogram on the masked image:

val maskedTarget = PixelImage(first1000Img.domain, (x, y) => RGBA(target(x, y).toRGB, first1000Img(x, y).a * first1000._2(x, y).toFloat))
val fgHist = HistogramRGB.fromImageRGBA(maskedTarget, 25, 0) // histogram built on the masked target, i.e. the face region only

The likelihood for the pixels in \(\mathcal{F}\) is a little more involved: remember that we search for the best match within a patch of \(9\times9\) neighboring pixels (a neighborhood of 4 in each direction):

val sdev = 0.043f
val pixEvalHSV: IsotropicGaussianPixelEvaluatorHSV = IsotropicGaussianPixelEvaluatorHSV(sdev)
val neighborhood = 4 // 4 pixels in each direction -> 9x9 patch
var x: Int = 0
val fgProbBuffer = PixelImage(nonfaceProb.domain, (_, _) => 0.0).toBuffer

val first1000ImgR = first1000Img.withAccessMode(AccessMode.Repeat())
while (x < target.width) {
  var y: Int = 0
  while (y < target.height) {
    if (first1000Img(x, y).a > 0) {
      // Pixels in the face region F: best match within the 9x9 neighborhood
      var maxNeighborhood = Double.NegativeInfinity
      var q: Int = -neighborhood
      while (q <= neighborhood) {
        var p: Int = -neighborhood
        while (p <= neighborhood) {
          val t1 = pixEvalHSV.logValue(target(x, y).toRGB, first1000ImgR(x + p, y + q).toRGB)
          maxNeighborhood = Math.max(t1, maxNeighborhood)
          p += 1
        }
        q += 1
      }
      fgProbBuffer(x, y) = maxNeighborhood
    }
    else {
      // Pixels in the non-face region B: histogram built on the face region
      fgProbBuffer(x, y) = fgHist.logValue(target(x, y).toRGB)
    }
    y += 1
  }
  x += 1
}
val fgProb: PixelImage[Double] = fgProbBuffer.toImage
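Note that with a neighborhood of 4, the inner loops evaluate a \(9 \times 9 = 81\) pixel patch for every face pixel, which is why this step takes a noticeable amount of time.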

The likelihoods in our framework are always log-likelihoods, so let's convert them back to the linear domain for our segmentation:

val imageGivenFace = fgProb.map(p => math.exp(p))
val imageGivenNonFace = nonfaceProb.map(p => math.exp(p))

Last but not least, we need a smoothness distribution for the segmentation, which defines how likely neighboring pixels are to share the same label:

val numLabels = 2
def binDist(pEqual: Double, numLabels: Int, width: Int, height: Int): BinaryLabelDistribution = {
    val pElse = (1 - pEqual) / (numLabels - 1)
    def pairwise(k: Label, l: Label) = if (k == l) pEqual else pElse
    PixelImage.view(width, height, (x, y) => pairwise)
}
val smoothnessDistribution = binDist(0.9, numLabels, target.width, target.height)
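With pEqual = 0.9 and two labels, a pair of neighboring pixels with different labels gets probability pElse = (1 - 0.9) / (2 - 1) = 0.1, so equal labels between neighbors are nine times more likely than a label change.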

Since our segmentation takes an initialization, we provide a simple one: we just initialize all pixels with the same label. The result is, however, not sensitive to this initialization since we are using fixed likelihoods:

val init = PixelImage(target.width, target.height, (x, y) => Label(0))

So let's start the segmentation algorithm and look at the result:

LoopyBPSegmentation.segmentImageFromProb(target.map(_.toRGB), init, imageGivenNonFace, imageGivenFace, smoothnessDistribution, numLabels, 5, true)
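The arguments are the target image, the initialization, the two pixel-wise probability images (non-face first, then face), the smoothness distribution, and the number of labels; the last two arguments are, presumably, the number of belief-propagation iterations and a flag toggling the intermediate visualization (we will switch it off in the function below).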

Since we will need multiple segmentation steps during the inference process, let us put it into a function for convenience:

  def segmentLBP(target: PixelImage[RGBA],
                 current: (RenderParameter, PixelImage[Int]),
                 renderer: MoMoRenderer): PixelImage[LabelDistribution] = {


    val curSample: PixelImage[RGBA] = renderer.renderImage(current._1)

    val nonfaceHist = HistogramRGB.fromImageRGBA(target, 25, 0)
    val nonfaceProb: PixelImage[Double] = target.map(p => nonfaceHist.logValue(p.toRGB))

    val maskedTarget = PixelImage(curSample.domain, (x, y) => RGBA(target(x, y).toRGB, curSample(x, y).a * current._2(x, y).toFloat))
    val fgHist = HistogramRGB.fromImageRGBA(maskedTarget, 25, 0) // histogram built on the masked target, i.e. the face region only

    val sdev = 0.043f
    val pixEvalHSV: IsotropicGaussianPixelEvaluatorHSV = IsotropicGaussianPixelEvaluatorHSV(sdev)
    val neighborhood = 4 // 4 pixels in each direction -> 9x9 patch
    var x: Int = 0
    val fgProbBuffer = PixelImage(nonfaceProb.domain, (_, _) => 0.0).toBuffer

    val curSampleR = curSample.withAccessMode(AccessMode.Repeat())
    while (x < target.width) {
      var y: Int = 0
      while (y < target.height) {
        if (curSample(x, y).a > 0) {
          // Pixels in the face region F: best match within the 9x9 neighborhood
          var maxNeighborhood = Double.NegativeInfinity
          var q: Int = -neighborhood
          while (q <= neighborhood) {
            var p: Int = -neighborhood
            while (p <= neighborhood) {
              val t1 = pixEvalHSV.logValue(target(x, y).toRGB, curSampleR(x + p, y + q).toRGB)
              maxNeighborhood = Math.max(t1, maxNeighborhood)
              p += 1
            }
            q += 1
          }
          fgProbBuffer(x, y) = maxNeighborhood
        }
        else {
          // Pixels in the non-face region B: histogram built on the face region
          fgProbBuffer(x, y) = fgHist.logValue(target(x, y).toRGB)
        }
        y += 1
      }
      x += 1
    }
    val fgProb: PixelImage[Double] = fgProbBuffer.toImage

    val imageGivenFace = fgProb.map(p => math.exp(p))
    val imageGivenNonFace = nonfaceProb.map(p => math.exp(p))

    val numLabels = 2
    def binDist(pEqual: Double, numLabels: Int, width: Int, height: Int): BinaryLabelDistribution = {
      val pElse = (1 - pEqual) / (numLabels - 1)
      def pairwise(k: Label, l: Label) = if (k == l) pEqual else pElse
      PixelImage.view(width, height, (x, y) => pairwise)
    }
    val smoothnessDistribution = binDist(0.9, numLabels, target.width, target.height)

    val init = PixelImage(target.width, target.height, (x, y) => Label(0))

    LoopyBPSegmentation.segmentImageFromProb(target.map(_.toRGB), init, imageGivenNonFace, imageGivenFace, smoothnessDistribution, numLabels, 5, false)
  }

We now have everything together: the Markov chain for parameter adaptation as well as the segmentation. So let's start the alternating, iterative EM-like algorithm.

Basically, we alternate between sampling steps (1000 samples) that update \(\theta\) with a fixed segmentation label \(z\), and a segmentation step that estimates \(z\) based on the updated model parameters \(\theta\) (the segmentation returns a per-pixel label distribution, from which we take the most probable label). The implementation is a simple loop with 5 repetitions. This leads to 6'000 samples in total (5'000 during EM plus the 1'000 samples from the beginning) and 5 segmentation steps. This code execution will take a long time; the steps in between are visualized (we show the segmentation and the fit after every 1000 samples for \(\theta\)) - so this is the best time to get yourself some fresh air and an ice cream:

  var current = first1000
  var i = 0

  while (i < 5) {

    // segment
    val zLabel = segmentLBP(target, current, rendererFace12)
    
    // update state
     current = (current._1, zLabel.map(_.maxLabel))
    
    // fit and update state
    current = imageFitter.iterator(current, labeledPrintLogger).take(1000).toIndexedSeq.last
    
    // visualize fit and label
    val zLabelImg = zLabel.map (l => RGB(l(Label(1))))
    val thetaImg = rendererFace12.renderImage(current._1)
    shelf(
      ImagePanel(zLabelImg),
      ImagePanel(thetaImg)
    ).displayIn("After Iteration: " + i)
    i += 1
  }

That's it - if you want to see more results, have a look at Egger 2016, 2017. In the next and last chapter, we will share some ideas about future extensions of this framework.

