You will implement Photon Mapping in this assignment, a bidirectional density estimation method for computing biased (but consistent) solutions to the rendering equation. By the end of the assignment, you will be able to render radiometrically difficult lighting effects resulting from complex light sub-paths, like caustics, much more efficiently than with the unbiased Monte Carlo solutions you've implemented so far.
Photon Mapping is a two-pass algorithm: photons are first emitted from the light sources, scattered through the scene, and deposited on surfaces; in the second pass, radiance is estimated either by approximating the local density of photons around a shading point, or by using the photons in the scene to approximate the incident radiance distribution about the shading point.
Recall the hemispherical form of the rendering equation discussed in class: \begin{equation} L(\mathbf{x}, \omega) = \int_{\mathcal{H}^2} f_r(\mathbf{x}, \omega', \omega) L(r(\mathbf{x}, \omega'), -\omega') \cos\theta' \, \mathrm{d}\omega', \label{path} \end{equation}
where $r(\mathbf{x}, \omega')$ is the ray tracing function that returns the nearest intersection point for a ray traced from the point $\mathbf{x}$ and in direction $\omega'$. We'll see that, in Photon Mapping, we can estimate this quantity by weighting the contribution of some number of photons $n$ (within an area $\Delta A$ around the shading point $\mathbf{x}$) by the BRDF $f_r$ as:
\begin{equation} L(\mathbf{x}, \omega) \approx \sum_{p=1}^{n} f_r(\mathbf{x}, \omega_p, \omega) \frac{\Delta \phi_p(\mathbf{x}, \omega_p)}{\Delta A} \end{equation}
For reference, the user-tweakable parameters in our photon mapper are defined according to the XML properties below:
<integrator type="photon-map">
<int name="photonCount" value="50000"/>
<float name="radius2" value="0.0008"/>
<int name="samplesFG" value="100"/>
<int name="samplesDI" value="50"/>
</integrator>
The meanings of these parameters, if not immediately obvious, will become so below.
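For illustration, these parameters might be parsed into integrator members roughly as follows. This is only a sketch: the `Props` type below is a hypothetical stand-in for the framework's actual property-list API in src/integrators/pm.cpp, and the field comments summarize each parameter's role.

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical stand-in for the framework's XML property list; the real
// integrator template exposes its own accessor API.
struct Props {
    std::map<std::string, float> values;
    float get(const std::string& name, float fallback) const {
        auto it = values.find(name);
        return it == values.end() ? fallback : it->second;
    }
};

struct PhotonMappingParams {
    int   photonCount; // total number of photons emitted from the lights
    float radius2;     // *squared* disk-lookup radius for density estimation
    int   samplesFG;   // final gather samples per shading point
    int   samplesDI;   // direct illumination samples per shading point
};

PhotonMappingParams parseParams(const Props& p) {
    PhotonMappingParams out;
    out.photonCount = static_cast<int>(p.get("photonCount", 50000.f));
    out.radius2     = p.get("radius2", 0.0008f);
    out.samplesFG   = static_cast<int>(p.get("samplesFG", 100.f));
    out.samplesDI   = static_cast<int>(p.get("samplesDI", 50.f));
    return out;
}
```

Note that `radius2` is a squared radius, matching the 0.0008 default used in the test scene below.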
An API template for the Photon Mapper is provided in src/integrators/pm.cpp. You will generate a photon map by completing the implementation of PhotonMappingIntegrator::generatePhotonMap() in order to emit, scatter and store photons from light sources into the scene. Note that src/integrators/pm.cpp already contains a Photon structure and an array of Photon where you can store your photons.
struct Photon {
Point3f x; //Position of photon
Vector3f w; //Direction of photon
Color3f phi; //Power of the photon
Normal3f n; //Surface normal of the deposited photon
};
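To make the emission pass concrete, here is a self-contained sketch under toy assumptions: a single spherical emitter above a diffuse floor plane at y = 0 with a grey albedo, and minimal stand-ins for the framework's Point3f/Vector3f/Color3f types. Your generatePhotonMap() will instead intersect the full scene and use the framework's sampling and Color3f::getLuminance() utilities; the structure of the loop is what matters here.

```cpp
#include <cassert>
#include <cmath>
#include <random>
#include <vector>

// Minimal stand-ins for the framework's vector/color types.
struct Vec3 { float x = 0, y = 0, z = 0; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

struct Photon { Vec3 x, w, phi, n; }; // position, direction, power, normal

// Build an orthonormal basis (t, b) around a unit normal n.
static void basis(Vec3 n, Vec3& t, Vec3& b) {
    if (std::fabs(n.x) > 0.5f) t = {n.y, -n.x, 0};
    else                       t = {0, n.z, -n.y};
    float len = std::sqrt(t.x * t.x + t.y * t.y + t.z * t.z);
    t = scale(t, 1.f / len);
    b = {n.y * t.z - n.z * t.y, n.z * t.x - n.x * t.z, n.x * t.y - n.y * t.x};
}

// Cosine-weighted hemisphere direction about n, returned in the world frame.
static Vec3 cosineSample(std::mt19937& rng, Vec3 n) {
    std::uniform_real_distribution<float> u(0.f, 1.f);
    float r = std::sqrt(u(rng)), phi = 2.f * 3.14159265f * u(rng);
    float lx = r * std::cos(phi), ly = r * std::sin(phi);
    float lz = std::sqrt(std::max(0.f, 1.f - lx * lx - ly * ly));
    Vec3 t, b; basis(n, t, b);
    return add(add(scale(t, lx), scale(b, ly)), scale(n, lz));
}

// Emit photons from a spherical light (center c, radius R, total flux
// phiTotal) and deposit them on the diffuse floor plane y = 0.
std::vector<Photon> generatePhotonMap(int photonCount) {
    const Vec3 c{0, 2, 0};
    const float R = 0.5f;
    const Vec3 phiTotal{50, 50, 50};
    const Vec3 floorAlbedo{0.7f, 0.7f, 0.7f};
    std::mt19937 rng(1337);
    std::uniform_real_distribution<float> u(0.f, 1.f);
    std::vector<Photon> map;

    for (int i = 0; i < photonCount; ++i) {
        // Uniform spherical *area* sampling of the emission position.
        float z = 1.f - 2.f * u(rng), a = 2.f * 3.14159265f * u(rng);
        float s = std::sqrt(std::max(0.f, 1.f - z * z));
        Vec3 nLight{s * std::cos(a), s * std::sin(a), z};
        Vec3 p = add(c, scale(nLight, R));
        Vec3 d = cosineSample(rng, nLight); // cosine-weighted emission
        Vec3 power = scale(phiTotal, 1.f / photonCount);

        for (;;) {
            if (!(d.y < 0.f && p.y > 0.f)) break; // ray leaves this open scene
            float t = -p.y / d.y;                 // intersect the floor plane
            p = add(p, scale(d, t)); p.y = 0.f;
            map.push_back({p, d, power, {0, 1, 0}});
            // Single-channel Russian roulette on the albedo's luminance
            // (mirroring the Color3f::getLuminance() hint below).
            float lum = 0.2126f * floorAlbedo.x + 0.7152f * floorAlbedo.y
                      + 0.0722f * floorAlbedo.z;
            if (u(rng) >= lum) break;                  // absorbed
            power = scale(power, floorAlbedo.x / lum); // = albedo/lum, per-channel in general
            d = cosineSample(rng, {0, 1, 0});          // diffuse scatter
        }
    }
    return map;
}
```

The scattered rays in this toy scene travel upward and escape, so the walk always ends; in the real scene the loop continues until Russian roulette terminates the path.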
A few hints:
- Use uniform spherical area sampling to sample photon locations, and uniform cosine hemisphere sampling to sample photon emission directions. Pay careful attention, as usual, to generate these quantities in the correct coordinate frame (i.e., local- vs. global/world-frame).
- Use Color3f::getLuminance() to map between RGB and float values when performing a single-channel Russian roulette (RR) decision.
- Consider temporarily modifying PhotonMappingIntegrator::Li() to visualize the photon map you generate, for debugging purposes; being sure that your photons are being stored at the right places (and with the right values) is important.

Implement a simple radiance estimate in PhotonMappingIntegrator::Li() using a disk lookup that is co-planar with the shading point, with a fixed lookup radius m_radius2 and a constant density estimation kernel, as:
\begin{equation} L(\mathbf{x}, \omega) \approx \sum_{p=1}^{n} f_r(\mathbf{x}, \omega_p, \omega) \frac{\Delta \phi_p(\mathbf{x}, \omega_p)}{\Delta A} \approx \sum_{p=1}^{k} f_r(\mathbf{x}, \omega_p, \omega) \frac{\phi_p}{\pi r^2} \end{equation}
where $k$ is the number of photons inside the disk lookup region of radius $r$. As usual $f_r$ is the (view-evaluated) BRDF.
You will use a KD-Tree we provide for you in src/core/kdtree.h in order to find the $k$ photons that lie within the fixed search radius. You will need to understand how to use the functions KDTree::build(), KDTree::push_back() and KDTree::nnSearch(). Read the comments in src/core/kdtree.h and be careful to correctly handle all possible return values of the function KDTree::nnSearch().
KDTree::nnSearch() will yield photons within a spherical volume around your lookup point; however, we only want to consider photons that lie on the same surface as the shading point: make sure that photon normals ($N_p$) are aligned with the surface normal ($N_p \cdot N > 0$), and that photons lie in front of the surface and not behind it ($N_p \cdot \omega_i > 0$ and $N \cdot \omega_i > 0$). Of course, as with any such floating point test, you should consider adding some meaningful epsilon value in order to avoid false negatives. Note that the KD-tree can also be used to perform an adaptive radius lookup (where, instead of specifying the lookup radius, you specify the number of photons you want to find); however, this assignment only requires you to implement the basic fixed-radius density estimate. Adaptive radius density estimation is reserved as a hacker point task (see the course website).
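Putting the lookup and filtering rules together, a constant-kernel estimate might be sketched as follows. The brute-force rangeSearch below is only a stand-in for KDTree::nnSearch(), the vector types are minimal placeholders for the framework's Point3f/Color3f, and a diffuse BRDF $f_r = \rho/\pi$ is assumed for concreteness.

```cpp
#include <cassert>
#include <random>
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Photon { Vec3 x, w, phi, n; }; // position, direction, power, normal

// Brute-force range query standing in for KDTree::nnSearch(); the provided
// kd-tree returns the same photon set, just much faster.
static std::vector<const Photon*> rangeSearch(const std::vector<Photon>& map,
                                              Vec3 q, float radius2) {
    std::vector<const Photon*> out;
    for (const Photon& p : map) {
        Vec3 d{p.x.x - q.x, p.x.y - q.y, p.x.z - q.z};
        if (dot(d, d) <= radius2) out.push_back(&p);
    }
    return out;
}

// Constant-kernel disk density estimate at point x with shading normal n,
// for a diffuse BRDF with the given albedo (f_r = albedo / pi).
Vec3 estimateRadiance(const std::vector<Photon>& map, Vec3 x, Vec3 n,
                      Vec3 albedo, float radius2) {
    const float pi = 3.14159265f;
    const float eps = 1e-4f, invArea = 1.f / (pi * radius2); // 1 / (pi r^2)
    Vec3 L{};
    for (const Photon* p : rangeSearch(map, x, radius2)) {
        Vec3 wi{-p->w.x, -p->w.y, -p->w.z}; // direction the photon came from
        if (dot(p->n, n)  <= eps) continue; // differently oriented surface
        if (dot(p->n, wi) <= eps) continue; // arrived from behind its surface
        if (dot(n, wi)    <= eps) continue; // behind the shading surface
        L.x += (albedo.x / pi) * p->phi.x * invArea;
        L.y += (albedo.y / pi) * p->phi.y * invArea;
        L.z += (albedo.z / pi) * p->phi.z * invArea;
    }
    return L;
}
```

A quick correctness check: depositing a total flux of $4\pi$ uniformly over the $[-1,1]^2$ floor gives irradiance $\pi$, so with albedo 1 the estimate at the center should be close to 1.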
Once you have implemented both the photon map building and density estimation passes, render the scene scenes/hw5/Tests/cbox-pm.xml
and compare the result with your path tracer. Both renders should be nearly identical with about 50M photons and a squared fixed radius lookup of 0.0008.
Now that you have a basic photon mapper implemented, we'll move towards a more efficient second pass. Modify your integrator to decouple the computation of direct and indirect illumination, as follows: only evaluate indirect illumination using the photon map, and evaluate direct illumination using your old Direct Integrator.
In Task 2 you used direct density estimation (i.e., at the shading point) to compute the indirect illumination contribution. Alternatively, you can use a final gathering (FG) pass to improve the convergence/quality of the indirect illumination. Final gathering operates similarly to the Direct Integrator; however, when computing the outgoing (indirect) radiance, you will now use a density estimate with the photons in your modified photon map to estimate the incident illumination distribution at your shading point.
You can keep the first-bounce photons in your map and still use your Direct Integrator to compute the direct illumination: since you're integrating these "direct photons" one bounce away from your shading point during the final gather, they contribute to the 1-bounce indirect illumination now (similarly to virtual point lighting). Final gathering should significantly improve the visual convergence of your photon mapping integrator.
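A self-contained sketch of the gather loop for a diffuse shading point follows, using single-channel radiance for brevity. The `radianceAtGatherPoint` callback is a hypothetical stand-in for tracing a gather ray and running the Task 2 density estimate at its hit point; note how, with cosine-weighted sampling, the $\cos\theta$ and $\pi$ terms cancel in the Monte Carlo weight.

```cpp
#include <cassert>
#include <cmath>
#include <functional>
#include <random>

// Final-gather sketch: shoot cosine-weighted gather rays and evaluate the
// photon map's radiance estimate where they land. radianceAtGatherPoint
// receives the sampled local direction (cosTheta, phi) and stands in for
// "trace the ray, then density-estimate at the intersection".
float finalGather(int samplesFG, float albedo,
                  const std::function<float(float, float)>& radianceAtGatherPoint,
                  std::mt19937& rng) {
    std::uniform_real_distribution<float> u(0.f, 1.f);
    const float pi = 3.14159265f;
    float L = 0.f;
    for (int i = 0; i < samplesFG; ++i) {
        // Cosine-weighted hemisphere sample (pdf = cosTheta / pi).
        float cosTheta = std::sqrt(u(rng));
        float phi = 2.f * pi * u(rng);
        float Lin = radianceAtGatherPoint(cosTheta, phi);
        // Diffuse BRDF f_r = albedo / pi; the cosTheta and pi terms cancel
        // against the pdf, leaving albedo * Lin per sample.
        L += (albedo / pi) * Lin * cosTheta / (cosTheta / pi);
    }
    return L / samplesFG;
}
```

As a sanity check, a constant incident radiance $L_{in}$ should yield exactly $\rho \, L_{in}$ regardless of the sample count.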
At about equal render time, and in the scenes we provide you with, your photon mapper should generate a much more visually pleasing image when compared to path tracing. Do recall, however, that the photon mapping algorithms you implement above are all biased estimators and, as such, they will not converge to exactly the same image as a converged path-traced rendering.
Finished? Render the scenes contained in scenes/hw5/Final
and use the given script to submit your files (as you did for the previous assignments). Include any specific comments in the appropriate section of the script.