You will be responsible for implementing two Monte Carlo rendering algorithms in this assignment, an ambient occlusion integrator and a more general direct illumination integrator.
Before starting the assignment, it might be a good idea to review the course slides for these topics.
You will first implement uniform direction sampling on the unit sphere and hemisphere. The methods you implement in this task will serve as building blocks later in this, and future, assignments. For each distribution, you will have to implement two routines: one to draw a sample proportional to the distribution, and another to evaluate the value of the PDF at sample locations. The Warp
class in include/nori/warp/warp.h
consists of sampling methods that accept 2D uniform canonical random variables $(u,v) \in [0,1) \times [0,1)$ as input and return warped 2D (or 3D) points $\phi(u,v)$ in a new domain. Every such method has an associated PDF method, and you will implement three (3) such warping and PDF method pairs in src/warp/warp.cpp
. For more information about sample warping, you can consult Chapter 13 of PBRT3.
There are a total of six (6) warping methods exposed in this class, and you are free to implement any of the remaining three (3) during the semester for hacker points later on.
Implement Warp::squareToUniformSphere()
and Warp::squareToUniformSpherePdf()
. The former transforms uniform 2D canonical random numbers on the unit square into uniform points on the surface of a unit sphere (centered at the origin). The latter implements this warping function's probability density function.
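As a sketch of one possible implementation (using std::array stand-ins for Nori's Point2f/Vector3f types; the real signatures live in include/nori/warp/warp.h), the inversion method gives $z$ uniform in $(-1, 1]$ and $\phi$ uniform in $[0, 2\pi)$:

```cpp
#include <algorithm>
#include <array>
#include <cmath>

// Simplified stand-ins for Nori's Point2f/Vector3f (the real types are Eigen-based).
using Point2f  = std::array<float, 2>;
using Vector3f = std::array<float, 3>;

static const float PI = 3.14159265358979323846f;

// Warp (u,v) in [0,1)^2 to a uniformly distributed direction on the unit sphere.
Vector3f squareToUniformSphere(const Point2f &s) {
    float z   = 1.0f - 2.0f * s[0];                       // z uniform in (-1, 1]
    float r   = std::sqrt(std::max(0.0f, 1.0f - z * z));  // radius of the z-slice
    float phi = 2.0f * PI * s[1];
    return { r * std::cos(phi), r * std::sin(phi), z };
}

// The density is constant: 1 over the sphere's surface area, 4*pi.
float squareToUniformSpherePdf(const Vector3f & /*v*/) {
    return 1.0f / (4.0f * PI);
}
```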
There are two types of hemispherical distributions you will implement.
First, implement Warp::squareToUniformHemisphere()
and Warp::squareToUniformHemispherePdf()
to transform uniform 2D canonical random numbers on the unit square into uniformly distributed points on the surface of a unit hemisphere (again, centered at the origin). Use the $z$-axis $(0,0,1)$ as the pseudo-normal of your hemisphere. Implement this function's probability density function, too.
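A hedged sketch along the same lines as the sphere warp (again with simplified stand-in types); restricting $z$ to $[0, 1)$ yields the upper hemisphere, and the PDF must be zero below it:

```cpp
#include <algorithm>
#include <array>
#include <cmath>

using Point2f  = std::array<float, 2>;
using Vector3f = std::array<float, 3>;

static const float PI = 3.14159265358979323846f;

// Warp (u,v) to a uniform direction on the z-up unit hemisphere.
Vector3f squareToUniformHemisphere(const Point2f &s) {
    float z   = s[0];                                     // z uniform in [0, 1)
    float r   = std::sqrt(std::max(0.0f, 1.0f - z * z));
    float phi = 2.0f * PI * s[1];
    return { r * std::cos(phi), r * std::sin(phi), z };
}

// Constant density over the upper hemisphere (area 2*pi), zero below it.
float squareToUniformHemispherePdf(const Vector3f &v) {
    return v[2] >= 0.0f ? 1.0f / (2.0f * PI) : 0.0f;
}
```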
Next, implement Warp::squareToCosineHemisphere()
and Warp::squareToCosineHemispherePdf()
to transform 2D canonical random numbers to directions distributed on the unit hemisphere according to a cosine-weighted density. Implement this distribution's PDF evaluation method, too. You may want to implement Warp::squareToConcentricDisk()
and Warp::squareToConcentricDiskPdf()
to help you with this task, if you choose to use Nusselt's Analog.
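The concentric-disk route can be sketched as follows (a hedged example with std::array stand-ins for Nori's vector types; your actual implementation belongs in src/warp/warp.cpp). Lifting a uniform disk sample onto the hemisphere (Nusselt's Analog, also known as Malley's method) produces cosine-distributed directions:

```cpp
#include <algorithm>
#include <array>
#include <cmath>

using Point2f  = std::array<float, 2>;
using Vector3f = std::array<float, 3>;

static const float PI = 3.14159265358979323846f;

// Shirley-Chiu concentric mapping from the unit square to the unit disk.
Point2f squareToConcentricDisk(const Point2f &s) {
    float a = 2.0f * s[0] - 1.0f;   // remap to [-1, 1]^2
    float b = 2.0f * s[1] - 1.0f;
    if (a == 0.0f && b == 0.0f)
        return { 0.0f, 0.0f };      // avoid division by zero at the center
    float r, phi;
    if (std::fabs(a) > std::fabs(b)) {
        r = a; phi = (PI / 4.0f) * (b / a);
    } else {
        r = b; phi = (PI / 2.0f) - (PI / 4.0f) * (a / b);
    }
    return { r * std::cos(phi), r * std::sin(phi) };
}

// Project the disk sample up onto the hemisphere: directions are then
// distributed proportionally to cos(theta).
Vector3f squareToCosineHemisphere(const Point2f &s) {
    Point2f d = squareToConcentricDisk(s);
    float z = std::sqrt(std::max(0.0f, 1.0f - d[0]*d[0] - d[1]*d[1]));
    return { d[0], d[1], z };
}

// p(omega) = cos(theta) / pi on the upper hemisphere, zero below it.
float squareToCosineHemispherePdf(const Vector3f &v) {
    return v[2] > 0.0f ? v[2] / PI : 0.0f;
}
```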
To test your code, launch the interactive warping tool warptest
and run the $\chi^2$ hypothesis tests to compare the sampled histogram with your integrated density. It is crucial that both the warping method and its corresponding PDF are mutually consistent. Significant errors can arise if inconsistent warpings are used for Monte Carlo integration.
Note that passing the test does not necessarily guarantee that your code is bug free. Use your judgment and don't rely exclusively on this test.
Ambient occlusion (AO) is a special-case direct illumination technique that assumes only diffuse reflectance and uniform environmental illumination. Some surface positions will receive less light than others due to occlusion, and so they will appear darker. Recall that, in AO, the reflected radiance at a point $\mathbf{x}$ is given by
\begin{equation}
L(\mathbf{x}) = \int_{\mathcal{H}^2} V(\mathbf{x}, \omega_i)\, \frac{\cos\theta_i}{\pi}\, \mathrm{d}\omega_i,
\end{equation}
where $V$ is the binary visibility function and $\theta_i$ is measured from the surface normal at $\mathbf{x}$. Complete the AmbientOcclusion
integrator in src/ao.cpp
and implement AmbientOcclusion::Li()
. Your integrator should have two class fields: the number of shading samples $N$, denoted as shadingSamples
, and the sampling strategy sampler = {uniform, cosine}
You should be able to set these properties manually in an XML scene file as follows:
<!-- Use ambient occlusion integrator -->
<integrator type="ao">
<integer name="shadingSamples" value="4"/>
<string name="sampler" value="uniform"/>
</integrator>
Start by implementing a Monte Carlo estimator using uniform hemispherical sampling. Note that this requires using your warping function Warp::squareToUniformHemisphere()
from Task 1: call the appropriate warping functions with the Warp::warp()
static method, passing the desired EWarpType
as a parameter. For this estimator, EWarpType::EUniformHemisphere
will draw a sample uniformly over the hemisphere aligned with the $z$-axis. You will have to rotate this direction into the hemisphere aligned with the surface normal at the shading point before querying the visibility function in the integrand using a scene-bounded shadow ray query.
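Nori-style frameworks usually ship a Frame helper for exactly this rotation; if yours does not, the change of basis can be sketched as below (simplified types; localToWorld is a hypothetical helper name, and the branchless basis construction follows Duff et al., "Building an Orthonormal Basis, Revisited", 2017):

```cpp
#include <array>
#include <cmath>

using Vector3f = std::array<float, 3>;

// Express a z-up local-frame direction in a world frame whose z-axis is the
// unit normal n: build tangent t and bitangent bt, then combine.
Vector3f localToWorld(const Vector3f &local, const Vector3f &n) {
    float sign = std::copysign(1.0f, n[2]);
    float a = -1.0f / (sign + n[2]);
    float b = n[0] * n[1] * a;
    Vector3f t  = { 1.0f + sign * n[0] * n[0] * a, sign * b, -sign * n[0] };
    Vector3f bt = { b, sign + n[1] * n[1] * a, -n[1] };
    return { local[0]*t[0] + local[1]*bt[0] + local[2]*n[0],
             local[0]*t[1] + local[1]*bt[1] + local[2]*n[1],
             local[0]*t[2] + local[1]*bt[2] + local[2]*n[2] };
}
```

Note that localToWorld({0,0,1}, n) returns n itself, which is a quick sanity check for any frame construction you write.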
Implement the second estimator, this time sampling directions on the hemisphere (aligned originally along the $z$-axis, which you must rotate appropriately) with a cosine-weighted distribution, using EWarpType::ECosineHemisphere
.
Complete the ambient occlusion ao
integrator so that the scene file can select either of the two sampling strategies you implemented. You can use the scene scenes/hw2/Tests/sphere-ao.xml
to test your implementation. Below is a reference image that was rendered to visual convergence, for comparison.
A sphere rendered with ambient occlusion using 1024 samples per pixel (spp) and cosine-weighted importance sampling
In the previous assignment, you implemented two types of delta lights. While these emission profiles are convenient in certain contexts, they are not physically realizable. In the real world, emitters have form and geometry (e.g., light bulbs, neon lights, etc.). In these cases, the direct illumination equation does not reduce from an integral to a deterministic evaluation.
You will implement a spherical area light geometric encapsulation, as well as three different sampling techniques specialized to this geometry. To guide you, we have provided you with a new class AreaLight
derived from Emitter
in src/emitters/area.cpp
. The emission profile of the light is expressed in terms of its radiance, which you can assume to be spherically uniform at each point on the emitter. You will notice that each Shape
object has a pointer to an Emitter
object, which allows us to attach an emitter to a shape via Shape::addChild()
. This mechanism allows us to differentiate between regular objects in the scene and those that are emitters. It is also the mechanism we'll use to define our light sources (through the XML interface in the scene file). The Shape
class has two virtual sampling functions (sampleArea()
, sampleSolidAngle()
) and their two associated PDF evaluation functions (pdfArea()
and pdfSolidAngle()
). You will be responsible for implementing these functions in your Sphere
class. This abstraction is notable as it paves the way for different area lights (e.g., mesh, rectangle, etc.) that you might encounter later in the course. The methods AreaLight::sample()
and AreaLight::pdf()
may behave differently depending on the type of shape (i.e., the geometry of the shape) they're associated with. Take a look at the implementation of Shape::sample()
and Shape::pdf()
methods, and keep in mind how a methodological use of polymorphism can increase code/functionality re-use. Concretely, your task here is to implement a sphere light—a Sphere
shape attached to an AreaLight
.
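For reference, a sphere light in a scene file might then look like the sketch below; the exact property names (center, radius, radiance) are assumptions and may differ in your codebase:

```xml
<!-- A sphere light: a sphere shape with an area emitter attached as a child -->
<shape type="sphere">
    <point name="center" value="0, 2, 0"/>
    <float name="radius" value="0.5"/>
    <emitter type="area">
        <color name="radiance" value="10, 10, 10"/>
    </emitter>
</shape>
```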
Once you've implemented your light structure, you'll be able to start implementing new direct illumination integrators that use area lights: as discussed in class, we may draw samples according to directions or points. Recall the reflection equation discussed in class, which expresses the reflected radiance distribution as an integral (over the unit hemisphere centered at $\mathbf{x}$) of the BRDF, the cosine foreshortening term, and the incident radiance:
\begin{equation}
L_r(\mathbf{x}, \omega_r) = \int_{\mathcal{H}^2} f_r(\mathbf{x}, \omega_i, \omega_r)\, L_i(\mathbf{x}, \omega_i)\, \cos\theta_i \, \mathrm{d}\omega_i. \label{reflec}
\end{equation}
Start from the simple
integrator from the previous assignment, and implement DirectIntegrator::Li()
in src/integrators/direct.cpp
. Once again, this integrator has two properties: the Monte Carlo sample count nSamples
and a sampling measure measure = {hemisphere, area, solid-angle}
. It can also take an extra property, warp-type = {uniform-hemisphere, cosine-hemisphere}
when set at measure = hemisphere
. These options allow us to select from many possible sampling strategies and integration domains.
<!-- Use direct illumination integrator -->
<integrator type="direct">
<integer name="nSamples" value="4"/>
<string name="measure" value="hemisphere"/>
<string name="warp-type" value="uniform-hemisphere"/>
</integrator>
A correct but naïve way of evaluating Equation \eqref{reflec} is to sample directions uniformly over the hemisphere. Your first estimator will implement the hemisphere
functionality, and so should support both uniform hemispherical sampling (uniform-hemisphere
) and cosine-weighted hemispherical sampling (cosine-hemisphere
) estimators. DirectIntegrator::Li()
will draw samples according to the appropriate strategy, and return an MC estimate of $L_r(\mathbf{x},\omega_r)$. This will once again rely on: sampling directions, tracing shadow rays from your shading point to evaluate the visibility term in these directions, evaluating the remaining terms of the integrand at these directions, and dividing by the appropriate PDF. Note that these hemispherical estimators can be extremely inefficient: light sources may subtend only a tiny solid angle within the entire hemisphere, and so many samples can be wasted, causing the algorithm to produce noisy images at low (and even moderate) sample counts.
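To make the estimator structure concrete, here is a self-contained toy version under strong simplifying assumptions: a diffuse BRDF with albedo rho, constant incident radiance $L_i = 1$, full visibility, and a z-up shading normal. The name estimateLr is hypothetical and not part of the assignment API; the point is the per-sample shape $f_r \cdot L_i \cdot \cos\theta \,/\, \mathrm{pdf}$:

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <random>

using Point2f  = std::array<float, 2>;
using Vector3f = std::array<float, 3>;

static const float PI = 3.14159265358979323846f;

// Uniform z-up hemisphere warp, as in Task 1.
Vector3f squareToUniformHemisphere(const Point2f &s) {
    float z   = s[0];
    float r   = std::sqrt(std::max(0.0f, 1.0f - z * z));
    float phi = 2.0f * PI * s[1];
    return { r * std::cos(phi), r * std::sin(phi), z };
}

// Toy MC estimate of L_r: diffuse BRDF f_r = rho/pi, incident radiance Li = 1,
// visibility V = 1 everywhere, uniform hemispherical sampling (pdf = 1/(2*pi)).
float estimateLr(float rho, int nSamples, unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<float> uni(0.0f, 1.0f);
    float sum = 0.0f;
    for (int i = 0; i < nSamples; ++i) {
        Vector3f wi = squareToUniformHemisphere({ uni(rng), uni(rng) });
        float cosTheta = wi[2];
        float pdf = 1.0f / (2.0f * PI);
        sum += (rho / PI) * 1.0f * cosTheta / pdf;   // f_r * Li * cos / pdf
    }
    // Analytically, the integral of (rho/pi)*cos(theta) over the hemisphere is
    // exactly rho, so the estimate converges to rho as nSamples grows.
    return sum / nSamples;
}
```

With cosine-weighted sampling the $\cos\theta/\pi$ factor cancels against the PDF, which is precisely why that strategy reduces variance for diffuse integrands.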
Instead of sampling directions on the hemisphere, we can alternatively sample surface points directly on the light source. Conceptually, this means that we can express our integral using the surface-area form of the reflection equation, integrating over the light source surfaces $\mathcal{L}$ instead of over the hemisphere of incident lighting directions $\mathcal{H}^2$:
\begin{equation}
L_r(\mathbf{x}, \omega_r) = \int_{\mathcal{L}} f_r(\mathbf{x}, \omega_i, \omega_r)\, L_e(\mathbf{y}, -\omega_i)\, V(\mathbf{x}, \mathbf{y})\, \frac{\cos\theta_i \cos\theta_o}{\lVert \mathbf{x} - \mathbf{y} \rVert^2}\, \mathrm{d}A(\mathbf{y}),
\end{equation}
where $\omega_i = (\mathbf{y} - \mathbf{x}) / \lVert \mathbf{y} - \mathbf{x} \rVert$, $\theta_o$ is the angle between $-\omega_i$ and the emitter normal at $\mathbf{y}$, and $V$ is the binary visibility between $\mathbf{x}$ and $\mathbf{y}$.
Implement Sphere::sampleArea()
, which expects a SampleQueryRecord &outSQR
as an output parameter and a point Point2f &sample
as an input parameter. Then, implement Sphere::pdfArea()
, which evaluates the (surface area) PDF at the sample position Point3f &sample
. Finally, implement AreaLight::eval()
that returns the outgoing radiance in direction $-\omega_i$ using an EmitterQueryRecord
output parameter, outERec
. This last function is necessary for evaluating the $L_e$ term in your integrand. When performing MC integration in Li()
, you can sample a point on the sphere light by calling the AreaLight::sample()
and AreaLight::pdf()
functions you implemented earlier, passing your sampled point sampleQueryRecord.sample.p
as the third parameter to AreaLight::eval()
. This surface-area sampling sampling strategy is typically preferable to uniform hemispherical sampling, but roughly half of the surface samples end up being wasted (i.e., falling on the backside of the sphere light, from the point of view of a shading point). Indeed, any sample on the emitter that is not directly visible from the shading point $\mathbf{x}$ needs to be discarded. There is a third, more efficient option for sphere light integration.
To avoid generating samples that are guaranteed not to contribute to our estimator, you will implement subtended solid angle sampling in the Sphere::sampleSolidAngle()
and Sphere::pdfSolidAngle()
methods. These routines use a SampleQueryRecord& outSQR
as an output variable, and expect a sample point Point2f &sample
and the shading point const Point3f& x
as input parameters. Sample a direction towards the subtended spherical cap in order to populate the fields of outSQR
. PBRT3 Section 14.2 will be useful here.
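The cone-sampling geometry from PBRT3 Section 14.2 can be sketched as follows (simplified types; ConeSample and localToWorld are hypothetical helper names, not the assignment API; the sketch assumes the shading point lies outside the sphere, since otherwise the cone is undefined):

```cpp
#include <algorithm>
#include <array>
#include <cmath>

using Point2f  = std::array<float, 2>;
using Vector3f = std::array<float, 3>;

static const float PI = 3.14159265358979323846f;

// Rotate a z-up local direction into a frame whose z-axis is the unit vector n
// (branchless basis construction after Duff et al. 2017).
Vector3f localToWorld(const Vector3f &local, const Vector3f &n) {
    float sign = std::copysign(1.0f, n[2]);
    float a = -1.0f / (sign + n[2]);
    float b = n[0] * n[1] * a;
    Vector3f t  = { 1.0f + sign * n[0] * n[0] * a, sign * b, -sign * n[0] };
    Vector3f bt = { b, sign + n[1] * n[1] * a, -n[1] };
    return { local[0]*t[0] + local[1]*bt[0] + local[2]*n[0],
             local[0]*t[1] + local[1]*bt[1] + local[2]*n[1],
             local[0]*t[2] + local[1]*bt[2] + local[2]*n[2] };
}

struct ConeSample {
    Vector3f dir;  // sampled direction from the shading point x
    float pdf;     // density w.r.t. the solid-angle measure
};

// Uniformly sample the cone of directions subtended by a sphere light, as seen
// from a shading point x outside the sphere.
ConeSample sampleSolidAngle(const Vector3f &x, const Vector3f &center,
                            float radius, const Point2f &s) {
    Vector3f toC = { center[0]-x[0], center[1]-x[1], center[2]-x[2] };
    float d2 = toC[0]*toC[0] + toC[1]*toC[1] + toC[2]*toC[2];
    float sinThetaMax2 = radius * radius / d2;            // sin^2 of cone half-angle
    float cosThetaMax  = std::sqrt(std::max(0.0f, 1.0f - sinThetaMax2));
    // Uniform direction inside the cone, in a frame with z along toC.
    float cosTheta = 1.0f - s[0] * (1.0f - cosThetaMax);
    float sinTheta = std::sqrt(std::max(0.0f, 1.0f - cosTheta * cosTheta));
    float phi = 2.0f * PI * s[1];
    Vector3f local = { sinTheta * std::cos(phi), sinTheta * std::sin(phi), cosTheta };
    float d = std::sqrt(d2);
    Vector3f axis = { toC[0]/d, toC[1]/d, toC[2]/d };
    // The cone covers a solid angle of 2*pi*(1 - cosThetaMax).
    return { localToWorld(local, axis), 1.0f / (2.0f * PI * (1.0f - cosThetaMax)) };
}
```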
To test your algorithms, render scenes/hw2/Tests/sphere-area.xml
with hemisphere, area and solid angle sampling. Below are reference images for all three cases.
Direct illumination with uniform hemispherical sampling at 16 spp
Direct illumination with uniform area sampling at 16 spp
Direct illumination with subtended solid angle sampling at 16 spp
Finished? Render the scenes contained in scenes/hw2/Finals
and use the submission script to submit your files as you did for A0 and A1. Include any specific comments in the appropriate section of the script.