**Ray Tracing in One Weekend**
[Peter Shirley][], [Trevor David Black][], [Steve Hollasch][]
Version 4.0.0-alpha.1, 2023-08-06
Copyright 2018-2023 Peter Shirley. All rights reserved.
Overview
====================================================================================================
I’ve taught many graphics classes over the years. Often I do them in ray tracing, because you are
forced to write all the code, but you can still get cool images with no API. I decided to adapt my
course notes into a how-to, to get you to a cool program as quickly as possible. It will not be a
full-featured ray tracer, but it does have the indirect lighting which has made ray tracing a staple
in movies. Follow these steps, and the architecture of the ray tracer you produce will be good for
extending to a more extensive ray tracer if you get excited and want to pursue that.
When somebody says “ray tracing” it could mean many things. What I am going to describe is
technically a path tracer, and a fairly general one. While the code will be pretty simple (let the
computer do the work!) I think you’ll be very happy with the images you can make.
I’ll take you through writing a ray tracer in the order I do it, along with some debugging tips. By
the end, you will have a ray tracer that produces some great images. You should be able to do this
in a weekend. If you take longer, don’t worry about it. I use C++ as the driving language, but you
don’t need to. However, I suggest you do, because it’s fast, portable, and most production movie and
video game renderers are written in C++. Note that I avoid most “modern features” of C++, but
inheritance and operator overloading are too useful for ray tracers to pass on.
> I do not provide the code online, but the code is real and I show all of it except for a few
> straightforward operators in the `vec3` class. I am a big believer in typing in code to learn it,
> but when code is available I use it, so I only practice what I preach when the code is not
> available. So don’t ask!
I have left that last part in because it is funny what a 180 I have done. Several readers ended up
with subtle errors that were helped when we compared code. So please do type in the code, but you
can find the finished source for each book in the [RayTracing project on GitHub][repo].
A note on the implementing code for these books -- our philosophy for the included code prioritizes
the following goals:
- The code should implement the concepts covered in the books.
- We use C++, but as simple as possible. Our programming style is very C-like, but we take
advantage of modern features where it makes the code easier to use or understand.
- Our coding style continues the style established from the original books as much as possible,
for continuity.
- Line length is kept to 96 characters per line, to keep lines consistent between the codebase and
code listings in the books.
The code thus provides a baseline implementation, with tons of improvements left for the reader to
enjoy. There are endless ways one can optimize and modernize the code; we prioritize the simple
solution.
We assume a little bit of familiarity with vectors (like dot product and vector addition). If you
don’t know that, do a little review. If you need that review, or to learn it for the first time,
check out the online [_Graphics Codex_][gfx-codex] by Morgan McGuire, _Fundamentals of Computer
Graphics_ by Steve Marschner and Peter Shirley, or _Fundamentals of Interactive Computer Graphics_
by J.D. Foley and Andy Van Dam.
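As a quick refresher, the two vector operations this book leans on most, written out for 3D vectors
$\mathbf{u}$ and $\mathbf{v}$, are vector addition and the dot product:

$$ \mathbf{u} + \mathbf{v} = (u_x + v_x, u_y + v_y, u_z + v_z) $$

$$ \mathbf{u} \cdot \mathbf{v} = u_x v_x + u_y v_y + u_z v_z $$

The dot product of two unit vectors is the cosine of the angle between them -- a fact we'll use
later when comparing rays against surface normals.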
Peter maintains a site related to this book series at https://in1weekend.blogspot.com/, which
includes further reading and links to resources.
If you want to communicate with us, feel free to send us an email at:
- Peter Shirley, ptrshrl@gmail.com
- Steve Hollasch, steve@hollasch.net
- Trevor David Black, trevordblack@trevord.black
Finally, if you run into problems with your implementation, have general questions, or would like to
share your own ideas or work, see [the GitHub Discussions forum][discussions] on the GitHub project.
Thanks to everyone who lent a hand on this project. You can find them in the acknowledgments section
at the end of this book.
Let’s get on with it!
Output an Image
====================================================================================================
The PPM Image Format
---------------------
Whenever you start a renderer, you need a way to see an image. The most straightforward way is to
write it to a file. The catch is, there are so many formats. Many of those are complex. I always
start with a plain text ppm file. Here’s a nice description from Wikipedia:
![Figure [ppm]: PPM Example](../images/fig-1.01-ppm.jpg)
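In case that figure doesn't come through, here's a minimal hand-written example of the format: a
`P3` line declaring a plain-text RGB file, the image width and height, the maximum color value, and
then one red-green-blue triplet per pixel:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
P3
3 2
255
255   0   0     0 255   0     0   0 255
255 255   0   255 255 255     0   0   0
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This describes a 3×2 image whose top row is a red, a green, and a blue pixel, and whose bottom row
is yellow, white, and black.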
Let’s make some C++ code to output such a thing:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include <iostream>

int main() {

    // Image

    int image_width = 256;
    int image_height = 256;

    // Render

    std::cout << "P3\n" << image_width << ' ' << image_height << "\n255\n";

    for (int j = 0; j < image_height; ++j) {
        for (int i = 0; i < image_width; ++i) {
            auto r = double(i) / (image_width-1);
            auto g = double(j) / (image_height-1);
            auto b = 0;

            int ir = static_cast<int>(255.999 * r);
            int ig = static_cast<int>(255.999 * g);
            int ib = static_cast<int>(255.999 * b);

            std::cout << ir << ' ' << ig << ' ' << ib << '\n';
        }
    }
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [main-initial]: [main.cc] Creating your first image]
There are some things to note in this code:
1. The pixels are written out in rows.
2. Every row of pixels is written out left to right.
3. These rows are written out from top to bottom.
4. By convention, each of the red/green/blue components is represented internally by a
   real-valued variable that ranges from 0.0 to 1.0. These must be scaled to integer values
   between 0 and 255 before we print them out.
5. Red goes from fully off (black) to fully on (bright red) from left to right, and green goes
   from fully off (black) at the top to fully on (bright green) at the bottom. Adding red and
   green light together makes yellow, so we should expect the bottom right corner to be yellow.
Creating an Image File
-----------------------
Because the file is written to the standard output stream, you'll need to redirect it to an image
file. Typically this is done from the command-line by using the `>` redirection operator, like so:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
build\Release\inOneWeekend.exe > image.ppm
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
(This example assumes that you are building with CMake, using the same approach as the
`CMakeLists.txt` file in the included source. Use whatever build environment (and language) you're
comfortable with.)
This is how things would look on Windows with CMake. On Mac or Linux, it might look like this:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
build/inOneWeekend > image.ppm
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Opening the output file (in `ToyViewer` on my Mac, but try it in your favorite image viewer and
Google “ppm viewer” if your viewer doesn’t support it) shows this result:
![Image 1: First PPM image
](../images/img-1.01-first-ppm-image.png class='pixel')
Hooray! This is the graphics “hello world”. If your image doesn’t look like that, open the output
file in a text editor and see what it looks like. It should start something like this:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
P3
256 256
255
0 0 0
1 0 0
2 0 0
3 0 0
4 0 0
5 0 0
6 0 0
7 0 0
8 0 0
9 0 0
10 0 0
11 0 0
12 0 0
...
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [first-img]: First image output]
If your PPM file doesn't look like this, then double-check your formatting code.
If it _does_ look like this but fails to render, then you may have line-ending differences or
something similar that is confusing your image viewer.
To help debug this, you can find a file `test.ppm` in the `images` directory of the GitHub project.
Use it to verify that your viewer can handle the PPM format, and as a comparison against your
generated PPM file.
Some readers have reported problems viewing their generated files on Windows.
In this case, the problem is often that the PPM is written out as UTF-16, often from PowerShell.
If you run into this problem, see
[Discussion 1114](https://github.com/RayTracing/raytracing.github.io/discussions/1114)
for help with this issue.
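For example, one workaround (assuming you're running from PowerShell, where the `>` operator can
write UTF-16) is to pipe the program's output through `Out-File` with an ASCII-compatible encoding
explicitly:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
build\Release\inOneWeekend.exe | Out-File -Encoding ascii image.ppm
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~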
If everything displays correctly, then you're pretty much done with system and IDE issues --
everything in the remainder of this series uses this same simple mechanism for generated rendered
images.
If you want to produce other image formats, I am a fan of `stb_image.h`, a header-only image library
available on GitHub at https://github.com/nothings/stb.
Adding a Progress Indicator
----------------------------
Before we continue, let's add a progress indicator to our output. This is a handy way to track the
progress of a long render, and also to possibly identify a run that's stalled out due to an infinite
loop or other problem.
Our program outputs the image to the standard output stream (`std::cout`), so leave that alone and
instead write to the logging output stream (`std::clog`):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    for (int j = 0; j < image_height; ++j) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
        std::clog << "\rScanlines remaining: " << (image_height - j) << ' ' << std::flush;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
        for (int i = 0; i < image_width; ++i) {
            auto r = double(i) / (image_width-1);
            auto g = double(j) / (image_height-1);
            auto b = 0;

            int ir = static_cast<int>(255.999 * r);
            int ig = static_cast<int>(255.999 * g);
            int ib = static_cast<int>(255.999 * b);

            std::cout << ir << ' ' << ig << ' ' << ib << '\n';
        }
    }
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    std::clog << "\rDone.                 \n";
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [main-progress]: [main.cc] Main render loop with progress reporting]
Now when running, you'll see a running count of the number of scanlines remaining. Hopefully this
runs so fast that you don't even see it! Don't worry -- you'll have lots of time in the future to
watch a slowly updating progress line as we expand our ray tracer.
The vec3 Class
====================================================================================================
Almost all graphics programs have some class(es) for storing geometric vectors and colors. In many
systems these vectors are 4D (3D position plus a homogeneous coordinate for geometry, or RGB plus an
alpha transparency component for colors). For our purposes, three coordinates suffice. We’ll use the
same class `vec3` for colors, locations, directions, offsets, whatever. Some people don’t like this
because it doesn’t prevent you from doing something silly, like subtracting a position from a color.
They have a good point, but we’re going to always take the “less code” route when not obviously
wrong. In spite of this, we do declare two aliases for `vec3`: `point3` and `color`. Since these two
types are just aliases for `vec3`, you won't get warnings if you pass a `color` to a function
expecting a `point3`, and nothing is stopping you from adding a `point3` to a `color`, but it makes
the code a little bit easier to read and to understand.
We define the `vec3` class in the top half of a new `vec3.h` header file, and define a set of useful
vector utility functions in the bottom half:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#ifndef VEC3_H
#define VEC3_H

#include <cmath>
#include <iostream>

using std::sqrt;

class vec3 {
  public:
    double e[3];

    vec3() : e{0,0,0} {}
    vec3(double e0, double e1, double e2) : e{e0, e1, e2} {}

    double x() const { return e[0]; }
    double y() const { return e[1]; }
    double z() const { return e[2]; }

    vec3 operator-() const { return vec3(-e[0], -e[1], -e[2]); }
    double operator[](int i) const { return e[i]; }
    double& operator[](int i) { return e[i]; }

    vec3& operator+=(const vec3 &v) {
        e[0] += v.e[0];
        e[1] += v.e[1];
        e[2] += v.e[2];
        return *this;
    }

    vec3& operator*=(double t) {
        e[0] *= t;
        e[1] *= t;
        e[2] *= t;
        return *this;
    }

    vec3& operator/=(double t) {
        return *this *= 1/t;
    }

    double length() const {
        return sqrt(length_squared());
    }

    double length_squared() const {
        return e[0]*e[0] + e[1]*e[1] + e[2]*e[2];
    }
};

// point3 is just an alias for vec3, but useful for geometric clarity in the code.
using point3 = vec3;


// Vector Utility Functions

inline std::ostream& operator<<(std::ostream &out, const vec3 &v) {
    return out << v.e[0] << ' ' << v.e[1] << ' ' << v.e[2];
}

inline vec3 operator+(const vec3 &u, const vec3 &v) {
    return vec3(u.e[0] + v.e[0], u.e[1] + v.e[1], u.e[2] + v.e[2]);
}

inline vec3 operator-(const vec3 &u, const vec3 &v) {
    return vec3(u.e[0] - v.e[0], u.e[1] - v.e[1], u.e[2] - v.e[2]);
}

inline vec3 operator*(const vec3 &u, const vec3 &v) {
    return vec3(u.e[0] * v.e[0], u.e[1] * v.e[1], u.e[2] * v.e[2]);
}

inline vec3 operator*(double t, const vec3 &v) {
    return vec3(t*v.e[0], t*v.e[1], t*v.e[2]);
}

inline vec3 operator*(const vec3 &v, double t) {
    return t * v;
}

inline vec3 operator/(vec3 v, double t) {
    return (1/t) * v;
}

inline double dot(const vec3 &u, const vec3 &v) {
    return u.e[0] * v.e[0]
         + u.e[1] * v.e[1]
         + u.e[2] * v.e[2];
}

inline vec3 cross(const vec3 &u, const vec3 &v) {
    return vec3(u.e[1] * v.e[2] - u.e[2] * v.e[1],
                u.e[2] * v.e[0] - u.e[0] * v.e[2],
                u.e[0] * v.e[1] - u.e[1] * v.e[0]);
}

inline vec3 unit_vector(vec3 v) {
    return v / v.length();
}

#endif
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [vec3-class]: [vec3.h] vec3 definitions and helper functions]
We use `double` here, but some ray tracers use `float`. `double` has greater precision and range,
but is twice the size compared to `float`. This increase in size may be important if you're
programming in limited memory conditions (such as hardware shaders). Either one is fine -- follow
your own tastes.
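If you're curious, it's easy to confirm the size difference on your own platform; on typical
systems a `float` occupies 4 bytes and a `double` 8:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include <iostream>

int main() {
    // Typically prints 4 and 8; the C++ standard guarantees only minimum precision.
    std::cout << "sizeof(float)  = " << sizeof(float)  << '\n';
    std::cout << "sizeof(double) = " << sizeof(double) << '\n';
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~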
Color Utility Functions
------------------------
Using our new `vec3` class, we'll create a new `color.h` header file and define a utility function
that writes a single pixel's color out to the standard output stream.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#ifndef COLOR_H
#define COLOR_H

#include "vec3.h"

#include <iostream>

using color = vec3;

void write_color(std::ostream &out, color pixel_color) {
    // Write the translated [0,255] value of each color component.
    out << static_cast<int>(255.999 * pixel_color.x()) << ' '
        << static_cast<int>(255.999 * pixel_color.y()) << ' '
        << static_cast<int>(255.999 * pixel_color.z()) << '\n';
}

#endif
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [color]: [color.h] color utility functions]
Now we can change our main to use both of these:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
#include "color.h"
#include "vec3.h"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include <iostream>

int main() {

    // Image

    int image_width = 256;
    int image_height = 256;

    // Render

    std::cout << "P3\n" << image_width << ' ' << image_height << "\n255\n";

    for (int j = 0; j < image_height; ++j) {
        std::clog << "\rScanlines remaining: " << (image_height - j) << ' ' << std::flush;
        for (int i = 0; i < image_width; ++i) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
            auto pixel_color = color(double(i)/(image_width-1), double(j)/(image_height-1), 0);
            write_color(std::cout, pixel_color);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
        }
    }

    std::clog << "\rDone.                 \n";
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [ppm-2]: [main.cc] Final code for the first PPM image]
And you should get the exact same picture as before.
Rays, a Simple Camera, and Background
====================================================================================================
The ray Class
--------------
The one thing that all ray tracers have is a ray class and a computation of what color is seen along
a ray. Let’s think of a ray as a function $\mathbf{P}(t) = \mathbf{A} + t \mathbf{b}$. Here
$\mathbf{P}$ is a 3D position along a line in 3D. $\mathbf{A}$ is the ray origin and $\mathbf{b}$ is
the ray direction. The ray parameter $t$ is a real number (`double` in the code). Plug in a
different $t$ and $\mathbf{P}(t)$ moves the point along the ray. Add in negative $t$ values and you
can go anywhere on the 3D line. For positive $t$, you get only the parts in front of $\mathbf{A}$,
and this is what is often called a half-line or a ray.
![Figure [lerp]: Linear interpolation](../images/fig-1.02-lerp.jpg)
We can represent the idea of a ray as a class, and represent the function $\mathbf{P}(t)$ as a
function that we'll call `ray::at(t)`:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#ifndef RAY_H
#define RAY_H

#include "vec3.h"

class ray {
  public:
    ray() {}

    ray(const point3& origin, const vec3& direction) : orig(origin), dir(direction) {}

    point3 origin() const  { return orig; }
    vec3 direction() const { return dir; }

    point3 at(double t) const {
        return orig + t*dir;
    }

  private:
    point3 orig;
    vec3 dir;
};

#endif
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [ray-initial]: [ray.h] The ray class]
Sending Rays Into the Scene
----------------------------
Now we are ready to turn the corner and make a ray tracer.
At its core, a ray tracer sends rays through pixels and computes the color seen in the direction of
those rays. The involved steps are
1. Calculate the ray from the “eye” through the pixel,
2. Determine which objects the ray intersects, and
3. Compute a color for the closest intersection point.
When first developing a ray tracer, I always do a simple camera for getting the code up and running.
I’ve often gotten into trouble using square images for debugging because I transpose $x$ and $y$ too
often, so we’ll use a non-square image.
A square image has a 1∶1 aspect ratio, because its width is the same as its height.
Since we want a non-square image, we'll choose 16∶9 because it's so common.
A 16∶9 aspect ratio means that the ratio of image width to image height is 16∶9.
Put another way, given an image with a 16∶9 aspect ratio,
$$\text{width} / \text{height} = 16 / 9 = 1.7778$$
For a practical example, an image 800 pixels wide by 400 pixels high has a 2∶1 aspect ratio.
The image's aspect ratio can be determined from the ratio of its width to its height.
However, since we have a given aspect ratio in mind, it's easier to set the image's width and the
aspect ratio, and then calculate the height from these.
This way, we can scale the image up or down by changing the image width, and it won't throw off
our desired aspect ratio.
We do have to make sure that when we solve for the image height the resulting height is at least 1.
In addition to setting up the pixel dimensions for the rendered image, we also need to set up a
virtual _viewport_ through which to pass our scene rays.
The viewport is a virtual rectangle in the 3D world that contains the grid of image pixel locations.
If pixels are spaced the same distance horizontally as they are vertically, the viewport that
bounds them will have the same aspect ratio as the rendered image.
The distance between two adjacent pixels is called the pixel spacing, and square pixels is the
standard.
To start things off, we'll choose an arbitrary viewport height of 2.0, and scale the viewport width
to give us the desired aspect ratio.
Here's a snippet of what this code will look like:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
auto aspect_ratio = 16.0 / 9.0;
int image_width = 400;
// Calculate the image height, and ensure that it's at least 1.
int image_height = static_cast<int>(image_width / aspect_ratio);
image_height = (image_height < 1) ? 1 : image_height;
// Viewport widths less than one are ok since they are real valued.
auto viewport_height = 2.0;
auto viewport_width = viewport_height * (static_cast<double>(image_width)/image_height);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [image-setup]: Rendered image setup]
If you're wondering why we don't just use `aspect_ratio` when computing `viewport_width`, it's
because the value set to `aspect_ratio` is the ideal ratio, which may not be the _actual_ ratio
between `image_width` and `image_height`. If `image_height` were allowed to be real valued--rather
than just an integer--then it would be fine to use `aspect_ratio`. But the _actual_ ratio
between `image_width` and `image_height` can vary based on two parts of the code. First,
`image_height` is rounded down to the nearest integer, which can increase the ratio. Second, we
don't allow `image_height` to be less than one, which can also change the actual aspect ratio.
Note that `aspect_ratio` is an ideal ratio, which we approximate as best as possible with the
integer-based ratio of image width over image height.
In order for our viewport proportions to exactly match our image proportions, we use the calculated
image aspect ratio to determine our final viewport width.
Next we will define the camera center: a point in 3D space from which all scene rays will originate
(this is also commonly referred to as the _eye point_).
The vector from the camera center to the viewport center will be orthogonal to the viewport.
We'll initially set the distance between the viewport and the camera center point to be one unit.
This distance is often referred to as the _focal length_.
For simplicity we'll start with the camera center at $(0,0,0)$.
We'll also have the y-axis go up, the x-axis to the right, and the negative z-axis pointing in the
viewing direction. (This is commonly referred to as _right-handed coordinates_.)
![Figure [camera-geom]: Camera geometry](../images/fig-1.03-cam-geom.jpg)
Now the inevitable tricky part.
While our 3D space has the conventions above, this conflicts with our image coordinates,
where we want to have the zeroth pixel in the top-left and work our way down to the last pixel at
the bottom right.
This means that our image coordinate Y-axis is inverted: Y increases going down the image.
As we scan our image, we will start at the upper left pixel (pixel $0,0$), scan left-to-right across
each row, and then scan row-by-row, top-to-bottom.
To help navigate the pixel grid, we'll use a vector from the left edge to the right edge
($\mathbf{V_u}$), and a vector from the upper edge to the lower edge ($\mathbf{V_v}$).
Our pixel grid will be inset from the viewport edges by half the pixel-to-pixel distance.
This way, our viewport area is evenly divided into width × height identical regions.
Here's what our viewport and pixel grid look like:
![Figure [pixel-grid]: Viewport and pixel grid](../images/fig-1.04-pixel-grid.jpg)
In this figure, we have the viewport, the pixel grid for a 7×5 resolution image, the viewport
upper left corner $\mathbf{Q}$, the pixel $\mathbf{P_{0,0}}$ location, the viewport vector
$\mathbf{V_u}$ (`viewport_u`), the viewport vector $\mathbf{V_v}$ (`viewport_v`), and the pixel
delta vectors $\mathbf{\Delta u}$ and $\mathbf{\Delta v}$.
Drawing from all of this, here's the code that implements the camera.
We'll stub in a function `ray_color(const ray& r)` that returns the color for a given scene ray
-- which we'll set to always return black for now.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "color.h"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
#include "ray.h"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "vec3.h"

#include <iostream>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
color ray_color(const ray& r) {
    return color(0,0,0);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
int main() {

    // Image

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    auto aspect_ratio = 16.0 / 9.0;
    int image_width = 400;

    // Calculate the image height, and ensure that it's at least 1.
    int image_height = static_cast<int>(image_width / aspect_ratio);
    image_height = (image_height < 1) ? 1 : image_height;

    // Camera

    auto focal_length = 1.0;
    auto viewport_height = 2.0;
    auto viewport_width = viewport_height * (static_cast<double>(image_width)/image_height);
    auto camera_center = point3(0, 0, 0);

    // Calculate the vectors across the horizontal and down the vertical viewport edges.
    auto viewport_u = vec3(viewport_width, 0, 0);
    auto viewport_v = vec3(0, -viewport_height, 0);

    // Calculate the horizontal and vertical delta vectors from pixel to pixel.
    auto pixel_delta_u = viewport_u / image_width;
    auto pixel_delta_v = viewport_v / image_height;

    // Calculate the location of the upper left pixel.
    auto viewport_upper_left = camera_center
                             - vec3(0, 0, focal_length) - viewport_u/2 - viewport_v/2;
    auto pixel00_loc = viewport_upper_left + 0.5 * (pixel_delta_u + pixel_delta_v);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    // Render

    std::cout << "P3\n" << image_width << " " << image_height << "\n255\n";

    for (int j = 0; j < image_height; ++j) {
        std::clog << "\rScanlines remaining: " << (image_height - j) << ' ' << std::flush;
        for (int i = 0; i < image_width; ++i) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
            auto pixel_center = pixel00_loc + (i * pixel_delta_u) + (j * pixel_delta_v);
            auto ray_direction = pixel_center - camera_center;
            ray r(camera_center, ray_direction);

            color pixel_color = ray_color(r);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
            write_color(std::cout, pixel_color);
        }
    }

    std::clog << "\rDone.                 \n";
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [creating-rays]: [main.cc] Creating scene rays]
Notice that in the code above, I didn't make `ray_direction` a unit vector, because I think not
doing that makes for simpler and slightly faster code.
Now we'll fill in the `ray_color(ray)` function to implement a simple gradient.
This function will linearly blend white and blue depending on the height of the $y$ coordinate
_after_ scaling the ray direction to unit length (so $-1.0 < y < 1.0$).
Because we're looking at the $y$ height after normalizing the vector, you'll notice a horizontal
gradient to the color in addition to the vertical gradient.
I'll use a standard graphics trick to linearly scale $0.0 ≤ a ≤ 1.0$.
When $a = 1.0$, I want blue.
When $a = 0.0$, I want white.
In between, I want a blend.
This forms a “linear blend”, or “linear interpolation”.
This is commonly referred to as a _lerp_ between two values.
A lerp is always of the form
$$ \mathit{blendedValue} = (1-a)\cdot\mathit{startValue} + a\cdot\mathit{endValue}, $$
with $a$ going from zero to one.
Putting all this together, here's what we get:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "color.h"
#include "ray.h"
#include "vec3.h"
#include
color ray_color(const ray& r) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
vec3 unit_direction = unit_vector(r.direction());
auto a = 0.5*(unit_direction.y() + 1.0);
return (1.0-a)*color(1.0, 1.0, 1.0) + a*color(0.5, 0.7, 1.0);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
}
...
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [main-blue-white-blend]: [main.cc] Rendering a blue-to-white gradient]
In our case this produces:
![Image 2: A blue-to-white gradient depending on ray Y coordinate
](../images/img-1.02-blue-to-white.png class='pixel')
Adding a Sphere
====================================================================================================
Let’s add a single object to our ray tracer. People often use spheres in ray tracers because
calculating whether a ray hits a sphere is relatively simple.
Ray-Sphere Intersection
------------------------
The equation for a sphere of radius $r$ that is centered at the origin is an important mathematical
equation:
$$ x^2 + y^2 + z^2 = r^2 $$
You can also think of this as saying that if a given point $(x,y,z)$ is on
the sphere, then $x^2 + y^2 + z^2 = r^2$. If a given point $(x,y,z)$ is _inside_ the sphere, then
$x^2 + y^2 + z^2 < r^2$, and if a given point $(x,y,z)$ is _outside_ the sphere, then
$x^2 + y^2 + z^2 > r^2$.
If we want to allow the sphere center to be at an arbitrary point $(C_x, C_y, C_z)$, then the
equation becomes a lot less nice:
$$ (x - C_x)^2 + (y - C_y)^2 + (z - C_z)^2 = r^2 $$
In graphics, you almost always want your formulas to be in terms of vectors so that all the
$x$/$y$/$z$ stuff can be simply represented using a `vec3` class. You might note that the vector
from center $\mathbf{C} = (C_x, C_y, C_z)$ to point $\mathbf{P} = (x,y,z)$ is
$(\mathbf{P} - \mathbf{C})$. If we use the definition of the dot product:
$$ (\mathbf{P} - \mathbf{C}) \cdot (\mathbf{P} - \mathbf{C})
= (x - C_x)^2 + (y - C_y)^2 + (z - C_z)^2
$$
Then we can rewrite the equation of the sphere in vector form as:
$$ (\mathbf{P} - \mathbf{C}) \cdot (\mathbf{P} - \mathbf{C}) = r^2 $$
We can read this as “any point $\mathbf{P}$ that satisfies this equation is on the sphere”. We want
to know if our ray $\mathbf{P}(t) = \mathbf{A} + t\mathbf{b}$ ever hits the sphere anywhere. If it
does hit the sphere, there is some $t$ for which $\mathbf{P}(t)$ satisfies the sphere equation. So
we are looking for any $t$ where this is true:
$$ (\mathbf{P}(t) - \mathbf{C}) \cdot (\mathbf{P}(t) - \mathbf{C}) = r^2 $$
which can be found by replacing $\mathbf{P}(t)$ with its expanded form:
$$ ((\mathbf{A} + t \mathbf{b}) - \mathbf{C})
\cdot ((\mathbf{A} + t \mathbf{b}) - \mathbf{C}) = r^2 $$
We have three vectors on the left dotted by three vectors on the right. If we solved for the full
dot product we would get nine vectors. You can definitely go through and write everything out, but
we don't need to work that hard. If you remember, we want to solve for $t$, so we'll separate the
terms based on whether there is a $t$ or not:
$$ (t \mathbf{b} + (\mathbf{A} - \mathbf{C}))
\cdot (t \mathbf{b} + (\mathbf{A} - \mathbf{C})) = r^2 $$
And now we follow the rules of vector algebra to distribute the dot product:
$$ t^2 \mathbf{b} \cdot \mathbf{b}
+ 2t \mathbf{b} \cdot (\mathbf{A}-\mathbf{C})
+ (\mathbf{A}-\mathbf{C}) \cdot (\mathbf{A}-\mathbf{C}) = r^2
$$
Move the square of the radius over to the left hand side:
$$ t^2 \mathbf{b} \cdot \mathbf{b}
+ 2t \mathbf{b} \cdot (\mathbf{A}-\mathbf{C})
+ (\mathbf{A}-\mathbf{C}) \cdot (\mathbf{A}-\mathbf{C}) - r^2 = 0
$$
It's hard to make out what exactly this equation is, but the vectors and $r$ in that equation are
all constant and known. Furthermore, the only vectors that we have are reduced to scalars by dot
product. The only unknown is $t$, and we have a $t^2$, which means that this equation is quadratic.
You can solve a quadratic equation $ax^2 + bx + c = 0$ by using the quadratic formula:
$$ \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} $$
Where for ray-sphere intersection the $a$/$b$/$c$ values are:
$$ a = \mathbf{b} \cdot \mathbf{b} $$
$$ b = 2 \mathbf{b} \cdot (\mathbf{A}-\mathbf{C}) $$
$$ c = (\mathbf{A}-\mathbf{C}) \cdot (\mathbf{A}-\mathbf{C}) - r^2 $$
Using all of the above you can solve for $t$, but there is a square root part that can be either
positive (meaning two real solutions), negative (meaning no real solutions), or zero (meaning one
real solution). In graphics, the algebra almost always relates very directly to the geometry. What
we have is:
![Figure [ray-sphere]: Ray-sphere intersection results](../images/fig-1.05-ray-sphere.jpg)
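To make this concrete, take a ray starting at the origin $\mathbf{A} = (0,0,0)$ aimed straight
down the z-axis with $\mathbf{b} = (0,0,-1)$, and a sphere with center $\mathbf{C} = (0,0,-1)$ and
radius $r = 0.5$ (the exact scene we're about to render). Then
$\mathbf{A}-\mathbf{C} = (0,0,1)$, so $a = \mathbf{b} \cdot \mathbf{b} = 1$,
$b = 2\,\mathbf{b} \cdot (\mathbf{A}-\mathbf{C}) = -2$, and $c = 1 - 0.25 = 0.75$. The
discriminant $b^2 - 4ac = 4 - 3 = 1$ is positive, giving the two roots $t = 0.5$ and $t = 1.5$:
exactly where the ray enters the front of the sphere and exits the back.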
Creating Our First Raytraced Image
-----------------------------------
If we take that math and hard-code it into our program, we can test our code by placing a small
sphere at -1 on the z-axis and then coloring red any pixel that intersects it.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
bool hit_sphere(const point3& center, double radius, const ray& r) {
    vec3 oc = r.origin() - center;
    auto a = dot(r.direction(), r.direction());
    auto b = 2.0 * dot(oc, r.direction());
    auto c = dot(oc, oc) - radius*radius;
    auto discriminant = b*b - 4*a*c;
    return (discriminant >= 0);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
color ray_color(const ray& r) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    if (hit_sphere(point3(0,0,-1), 0.5, r))
        return color(1, 0, 0);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    vec3 unit_direction = unit_vector(r.direction());
    auto a = 0.5*(unit_direction.y() + 1.0);
    return (1.0-a)*color(1.0, 1.0, 1.0) + a*color(0.5, 0.7, 1.0);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [main-red-sphere]: [main.cc] Rendering a red sphere]
What we get is this:
![Image 3: A simple red sphere
](../images/img-1.03-red-sphere.png class='pixel')
Now this lacks all sorts of things -- like shading, reflection rays, and more than one object --
but we are closer to halfway done than we are to our start! One thing to be aware of is that we
are testing to see if a ray intersects with the sphere by solving the quadratic equation and seeing
if a solution exists, but solutions with negative values of $t$ work just fine. If you change your
sphere center to $z = +1$ you will get exactly the same picture because this solution doesn't
distinguish between objects _in front of the camera_ and objects _behind the camera_. This is not a
feature! We’ll fix those issues next.
Surface Normals and Multiple Objects
====================================================================================================
Shading with Surface Normals
-----------------------------
First, let’s get ourselves a surface normal so we can shade. This is a vector that is perpendicular
to the surface at the point of intersection.
We have a key design decision to make for normal vectors in our code: whether normal vectors will
have an arbitrary length, or will be normalized to unit length.
It is tempting to skip the expensive square root operation involved in normalizing the vector, in
case it's not needed.
In practice, however, there are three important observations.
First, if a unit-length normal vector is _ever_ required, then you might as well do it up front
once, instead of over and over again "just in case" for every location where unit-length is
required.
Second, we _do_ require unit-length normal vectors in several places.
Third, if you require normal vectors to be unit length, then you can often efficiently generate that
vector with an understanding of the specific geometry class, in its constructor, or in the `hit()`
function.
For example, sphere normals can be made unit length simply by dividing by the sphere radius,
avoiding the square root entirely.
Given all of this, we will adopt the policy that all normal vectors will be of unit length.
For a sphere, the outward normal is in the direction of the hit point minus the center:
![Figure [sphere-normal]: Sphere surface-normal geometry](../images/fig-1.06-sphere-normal.jpg)
On the earth, this means that the vector from the earth’s center to you points straight up. Let’s
throw that into the code now, and shade it. We don’t have any lights or anything yet, so let’s just
visualize the normals with a color map.
A common trick used for visualizing normals (because it’s easy and somewhat intuitive to assume
$\mathbf{n}$ is a unit length vector -- so each component is between -1 and 1) is to map each
component to the interval from 0 to 1, and then map $(x, y, z)$ to $(\mathit{red}, \mathit{green},
\mathit{blue})$.
For the normal, we need the hit point, not just whether we hit or not (which is all we're
calculating at the moment).
We only have one sphere in the scene, and it's directly in front of the camera, so we won't worry
about negative values of $t$ yet.
We'll just assume the closest hit point (smallest $t$) is the one that we want.
These changes in the code let us compute and visualize $\mathbf{n}$:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
double hit_sphere(const point3& center, double radius, const ray& r) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    vec3 oc = r.origin() - center;
    auto a = dot(r.direction(), r.direction());
    auto b = 2.0 * dot(oc, r.direction());
    auto c = dot(oc, oc) - radius*radius;
    auto discriminant = b*b - 4*a*c;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    if (discriminant < 0) {
        return -1.0;
    } else {
        return (-b - sqrt(discriminant) ) / (2.0*a);
    }
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
}

color ray_color(const ray& r) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    auto t = hit_sphere(point3(0,0,-1), 0.5, r);
    if (t > 0.0) {
        vec3 N = unit_vector(r.at(t) - vec3(0,0,-1));
        return 0.5*color(N.x()+1, N.y()+1, N.z()+1);
    }
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    vec3 unit_direction = unit_vector(r.direction());
    auto a = 0.5*(unit_direction.y() + 1.0);
    return (1.0-a)*color(1.0, 1.0, 1.0) + a*color(0.5, 0.7, 1.0);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [render-surface-normal]: [main.cc] Rendering surface normals on a sphere]
And that yields this picture:
![Image 4: A sphere colored according to its normal vectors
](../images/img-1.04-normals-sphere.png class='pixel')
Simplifying the Ray-Sphere Intersection Code
---------------------------------------------
Let’s revisit the ray-sphere function:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
double hit_sphere(const point3& center, double radius, const ray& r) {
    vec3 oc = r.origin() - center;
    auto a = dot(r.direction(), r.direction());
    auto b = 2.0 * dot(oc, r.direction());
    auto c = dot(oc, oc) - radius*radius;
    auto discriminant = b*b - 4*a*c;

    if (discriminant < 0) {
        return -1.0;
    } else {
        return (-b - sqrt(discriminant) ) / (2.0*a);
    }
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [ray-sphere-before]: [main.cc] Ray-sphere intersection code (before)]
First, recall that a vector dotted with itself is equal to the squared length of that vector.
Second, notice how the equation for `b` has a factor of two in it. Consider what happens to the
quadratic equation if $b = 2h$:
$$ \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} $$
$$ = \frac{-2h \pm \sqrt{(2h)^2 - 4ac}}{2a} $$
$$ = \frac{-2h \pm 2\sqrt{h^2 - ac}}{2a} $$
$$ = \frac{-h \pm \sqrt{h^2 - ac}}{a} $$
Using these observations, we can now simplify the sphere-intersection code to this:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
double hit_sphere(const point3& center, double radius, const ray& r) {
    vec3 oc = r.origin() - center;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    auto a = r.direction().length_squared();
    auto half_b = dot(oc, r.direction());
    auto c = oc.length_squared() - radius*radius;
    auto discriminant = half_b*half_b - a*c;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++

    if (discriminant < 0) {
        return -1.0;
    } else {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
        return (-half_b - sqrt(discriminant) ) / a;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    }
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [ray-sphere-after]: [main.cc] Ray-sphere intersection code (after)]
An Abstraction for Hittable Objects
------------------------------------
Now, how about more than one sphere? While it is tempting to have an array of spheres, a very clean
solution is to make an “abstract class” for anything a ray might hit, and make both a sphere and a
list of spheres just something that can be hit. What that class should be called is something of a
quandary -- calling it an “object” would be good if not for “object oriented” programming.
“Surface” is often used, with the weakness being maybe we will want volumes (fog, clouds, stuff
like that). “hittable” emphasizes the member function that unites them. I don’t love any of these,
but we'll go with “hittable”.
This `hittable` abstract class will have a `hit` function that takes in a ray. Most ray tracers
have found it convenient to add a valid interval for hits $t_{\mathit{min}}$ to $t_{\mathit{max}}$,
so the hit only “counts” if $t_{\mathit{min}} < t < t_{\mathit{max}}$. For the initial rays this is
positive $t$, but as we will see, it can simplify our code to have an interval
$t_{\mathit{min}}$ to $t_{\mathit{max}}$. One design question is whether to do things like compute
the normal if we hit something. We might end up hitting something closer as we do our search, and we
will only need the normal of the closest thing. I will go with the simple solution and compute a
bundle of stuff I will store in some structure. Here’s the abstract class:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#ifndef HITTABLE_H
#define HITTABLE_H

#include "ray.h"

class hit_record {
  public:
    point3 p;
    vec3 normal;
    double t;
};

class hittable {
  public:
    virtual ~hittable() = default;

    virtual bool hit(const ray& r, double ray_tmin, double ray_tmax, hit_record& rec) const = 0;
};

#endif
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [hittable-initial]: [hittable.h] The hittable class]
And here’s the sphere:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#ifndef SPHERE_H
#define SPHERE_H

#include "hittable.h"
#include "vec3.h"

class sphere : public hittable {
  public:
    sphere(point3 _center, double _radius) : center(_center), radius(_radius) {}

    bool hit(const ray& r, double ray_tmin, double ray_tmax, hit_record& rec) const override {
        vec3 oc = r.origin() - center;
        auto a = r.direction().length_squared();
        auto half_b = dot(oc, r.direction());
        auto c = oc.length_squared() - radius*radius;

        auto discriminant = half_b*half_b - a*c;
        if (discriminant < 0) return false;
        auto sqrtd = sqrt(discriminant);

        // Find the nearest root that lies in the acceptable range.
        auto root = (-half_b - sqrtd) / a;
        if (root <= ray_tmin || ray_tmax <= root) {
            root = (-half_b + sqrtd) / a;
            if (root <= ray_tmin || ray_tmax <= root)
                return false;
        }

        rec.t = root;
        rec.p = r.at(rec.t);
        rec.normal = (rec.p - center) / radius;

        return true;
    }

  private:
    point3 center;
    double radius;
};

#endif
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [sphere-initial]: [sphere.h] The sphere class]
Front Faces Versus Back Faces
------------------------------
The second design decision for normals is whether they should always point out. At present, the
normal found will always be in the direction of the center to the intersection point (the normal
points out). If the ray intersects the sphere from the outside, the normal points against the ray.
If the ray intersects the sphere from the inside, the normal (which always points out) points with
the ray. Alternatively, we can have the normal always point against the ray. If the ray is outside
the sphere, the normal will point outward, but if the ray is inside the sphere, the normal will
point inward.
![Figure [normal-sides]: Possible directions for sphere surface-normal geometry
](../images/fig-1.07-normal-sides.jpg)
We need to choose one of these possibilities because we will eventually want to determine which
side of the surface that the ray is coming from. This is important for objects that are rendered
differently on each side, like the text on a two-sided sheet of paper, or for objects that have an
inside and an outside, like glass balls.
If we decide to have the normals always point out, then we will need to determine which side the
ray is on when we color it. We can figure this out by comparing the ray with the normal. If the
ray and the normal face in the same direction, the ray is inside the object; if the ray and the
normal face in opposite directions, then the ray is outside the object. This can be determined by
taking the dot product of the two vectors: if their dot product is positive, the ray is inside the
sphere.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
if (dot(ray_direction, outward_normal) > 0.0) {
    // ray is inside the sphere
    ...
} else {
    // ray is outside the sphere
    ...
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [ray-normal-comparison]: Comparing the ray and the normal]
If we decide to have the normals always point against the ray, we won't be able to use the dot
product to determine which side of the surface the ray is on. Instead, we would need to store that
information:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
bool front_face;
if (dot(ray_direction, outward_normal) > 0.0) {
    // ray is inside the sphere
    normal = -outward_normal;
    front_face = false;
} else {
    // ray is outside the sphere
    normal = outward_normal;
    front_face = true;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [normals-point-against]: Remembering the side of the surface]
We can set things up so that normals always point “outward” from the surface, or always point
against the incident ray. This decision is determined by whether you want to determine the side of
the surface at the time of geometry intersection or at the time of coloring. In this book we have
more material types than we have geometry types, so we'll go for less work and put the determination
at geometry time. This is simply a matter of preference, and you'll see both implementations in the
literature.
We add the `front_face` bool to the `hit_record` class.
We'll also add a function to solve this calculation for us: `set_face_normal()`.
For convenience we will assume that the vector passed to the new `set_face_normal()` function is of
unit length.
We could always normalize the parameter explicitly, but it's more efficient if the geometry code
does this, as it's usually easier when you know more about the specific geometry.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class hit_record {
  public:
    point3 p;
    vec3 normal;
    double t;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    bool front_face;

    void set_face_normal(const ray& r, const vec3& outward_normal) {
        // Sets the hit record normal vector.
        // NOTE: the parameter `outward_normal` is assumed to have unit length.

        front_face = dot(r.direction(), outward_normal) < 0;
        normal = front_face ? outward_normal : -outward_normal;
    }
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [front-face-tracking]: [hittable.h]
Adding front-face tracking to hit_record]
And then we add the surface side determination to the class:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class sphere : public hittable {
  public:
    ...

    bool hit(const ray& r, double ray_tmin, double ray_tmax, hit_record& rec) const {
        ...

        rec.t = root;
        rec.p = r.at(rec.t);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
        vec3 outward_normal = (rec.p - center) / radius;
        rec.set_face_normal(r, outward_normal);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++

        return true;
    }

    ...
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [sphere-final]: [sphere.h] The sphere class with normal determination]
A List of Hittable Objects
---------------------------
We have a generic object called a `hittable` that the ray can intersect with. We now add a class
that stores a list of `hittable`s:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#ifndef HITTABLE_LIST_H
#define HITTABLE_LIST_H

#include "hittable.h"

#include <memory>
#include <vector>

using std::shared_ptr;
using std::make_shared;

class hittable_list : public hittable {
  public:
    std::vector<shared_ptr<hittable>> objects;

    hittable_list() {}
    hittable_list(shared_ptr<hittable> object) { add(object); }

    void clear() { objects.clear(); }

    void add(shared_ptr<hittable> object) {
        objects.push_back(object);
    }

    bool hit(const ray& r, double ray_tmin, double ray_tmax, hit_record& rec) const override {
        hit_record temp_rec;
        bool hit_anything = false;
        auto closest_so_far = ray_tmax;

        for (const auto& object : objects) {
            if (object->hit(r, ray_tmin, closest_so_far, temp_rec)) {
                hit_anything = true;
                closest_so_far = temp_rec.t;
                rec = temp_rec;
            }
        }

        return hit_anything;
    }
};

#endif
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [hittable-list-initial]: [hittable_list.h] The hittable_list class]
Some New C++ Features
----------------------
The `hittable_list` class code uses two C++ features that may trip you up if you're not normally a
C++ programmer: `vector` and `shared_ptr`.
`shared_ptr` is a pointer to some allocated type, with reference-counting semantics.
Every time you assign its value to another shared pointer (usually with a simple assignment), the
reference count is incremented. As shared pointers go out of scope (like at the end of a block or
function), the reference count is decremented. Once the count goes to zero, the object is safely
deleted.
Typically, a shared pointer is first initialized with a newly-allocated object, something like this:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
shared_ptr<double> double_ptr = make_shared<double>(0.37);
shared_ptr<vec3>   vec3_ptr   = make_shared<vec3>(1.414214, 2.718281, 1.618034);
shared_ptr<sphere> sphere_ptr = make_shared<sphere>(point3(0,0,0), 1.0);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [shared-ptr]: An example allocation using shared_ptr]
`make_shared<thing>(thing_constructor_params ...)` allocates a new instance of type `thing`, using
the constructor parameters. It returns a `shared_ptr<thing>`.

Since the type can be automatically deduced by the return type of `make_shared<type>(...)`, the
above lines can be more simply expressed using C++'s `auto` type specifier:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
auto double_ptr = make_shared<double>(0.37);
auto vec3_ptr   = make_shared<vec3>(1.414214, 2.718281, 1.618034);
auto sphere_ptr = make_shared<sphere>(point3(0,0,0), 1.0);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [shared-ptr-auto]: An example allocation using shared_ptr with auto type]
We'll use shared pointers in our code, because it allows multiple geometries to share a common
instance (for example, a bunch of spheres that all use the same color material), and because
it makes memory management automatic and easier to reason about.
`std::shared_ptr` is included with the `<memory>` header.
The second C++ feature you may be unfamiliar with is `std::vector`. This is a generic array-like
collection of an arbitrary type. Above, we use a collection of pointers to `hittable`. `std::vector`
automatically grows as more values are added: `objects.push_back(object)` adds a value to the end of
the `std::vector` member variable `objects`.
`std::vector` is included with the `<vector>` header.
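If `std::vector` is unfamiliar, here's a small self-contained sketch of the two operations we rely
on (the variable names here are just for illustration):

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include <iostream>
#include <vector>

int main() {
    std::vector<double> values;  // starts out empty

    values.push_back(0.25);      // the vector grows automatically
    values.push_back(0.75);

    // Range-based iteration, just like our loop over `objects` above.
    for (const auto& v : values)
        std::cout << v << '\n';
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~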
Finally, the `using` statements in listing [hittable-list-initial] tell the compiler that we'll be
getting `shared_ptr` and `make_shared` from the `std` library, so we don't need to prefix these with
`std::` every time we reference them.
Common Constants and Utility Functions
---------------------------------------
We need some math constants that we conveniently define in their own header file. For now we only
need infinity, but we will also throw our own definition of pi in there, which we will need later.
There is no standard portable definition of pi, so we just define our own constant for it. We'll
throw common useful constants and future utility functions in `rtweekend.h`, our general main header
file.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#ifndef RTWEEKEND_H
#define RTWEEKEND_H

#include <cmath>
#include <limits>
#include <memory>


// Usings

using std::shared_ptr;
using std::make_shared;
using std::sqrt;

// Constants

const double infinity = std::numeric_limits<double>::infinity();
const double pi = 3.1415926535897932385;

// Utility Functions

inline double degrees_to_radians(double degrees) {
    return degrees * pi / 180.0;
}

// Common Headers

#include "ray.h"
#include "vec3.h"

#endif
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [rtweekend-initial]: [rtweekend.h] The rtweekend.h common header]
And the new main:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
#include "rtweekend.h"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "color.h"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
#include "hittable.h"
#include "hittable_list.h"
#include "sphere.h"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include <iostream>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ delete
double hit_sphere(const point3& center, double radius, const ray& r) {
    ...
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
color ray_color(const ray& r, const hittable& world) {
    hit_record rec;
    if (world.hit(r, 0, infinity, rec)) {
        return 0.5 * (rec.normal + color(1,1,1));
    }
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    vec3 unit_direction = unit_vector(r.direction());
    auto a = 0.5*(unit_direction.y() + 1.0);
    return (1.0-a)*color(1.0, 1.0, 1.0) + a*color(0.5, 0.7, 1.0);
}

int main() {

    // Image

    auto aspect_ratio = 16.0 / 9.0;
    int image_width = 400;

    // Calculate the image height, and ensure that it's at least 1.
    int image_height = static_cast<int>(image_width / aspect_ratio);
    image_height = (image_height < 1) ? 1 : image_height;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    // World

    hittable_list world;

    world.add(make_shared<sphere>(point3(0,0,-1), 0.5));
    world.add(make_shared<sphere>(point3(0,-100.5,-1), 100));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    // Camera

    auto focal_length = 1.0;
    auto viewport_height = 2.0;
    auto viewport_width = viewport_height * (static_cast<double>(image_width)/image_height);
    auto camera_center = point3(0, 0, 0);

    // Calculate the vectors across the horizontal and down the vertical viewport edges.
    auto viewport_u = vec3(viewport_width, 0, 0);
    auto viewport_v = vec3(0, -viewport_height, 0);

    // Calculate the horizontal and vertical delta vectors from pixel to pixel.
    auto pixel_delta_u = viewport_u / image_width;
    auto pixel_delta_v = viewport_v / image_height;

    // Calculate the location of the upper left pixel.
    auto viewport_upper_left = camera_center
                             - vec3(0, 0, focal_length) - viewport_u/2 - viewport_v/2;
    auto pixel00_loc = viewport_upper_left + 0.5 * (pixel_delta_u + pixel_delta_v);

    // Render

    std::cout << "P3\n" << image_width << ' ' << image_height << "\n255\n";

    for (int j = 0; j < image_height; ++j) {
        std::clog << "\rScanlines remaining: " << (image_height - j) << ' ' << std::flush;
        for (int i = 0; i < image_width; ++i) {
            auto pixel_center = pixel00_loc + (i * pixel_delta_u) + (j * pixel_delta_v);
            auto ray_direction = pixel_center - camera_center;
            ray r(camera_center, ray_direction);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
color pixel_color = ray_color(r, world);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
write_color(std::cout, pixel_color);
}
}
std::clog << "\rDone. \n";
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [main-with-rtweekend-h]: [main.cc] The new main with hittables]
This yields a picture that is really just a visualization of where the spheres are located along
with their surface normal. This is often a great way to view any flaws or specific characteristics
of a geometric model.
![Image 5: Resulting render of normals-colored sphere with ground
](../images/img-1.05-normals-sphere-ground.png class='pixel')
An Interval Class
------------------
Before we continue, we'll implement an interval class to manage real-valued intervals with a minimum
and a maximum. We'll end up using this class quite often as we proceed.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#ifndef INTERVAL_H
#define INTERVAL_H
class interval {
public:
double min, max;
interval() : min(+infinity), max(-infinity) {} // Default interval is empty
interval(double _min, double _max) : min(_min), max(_max) {}
bool contains(double x) const {
return min <= x && x <= max;
}
bool surrounds(double x) const {
return min < x && x < max;
}
static const interval empty, universe;
};
const interval interval::empty    = interval(+infinity, -infinity);
const interval interval::universe = interval(-infinity, +infinity);
#endif
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [interval-initial]: [interval.h] Introducing the new interval class]
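The only subtlety worth calling out is `contains()` versus `surrounds()`: the former includes the
interval's endpoints, the latter excludes them. Once `rtweekend.h` includes `interval.h` (next
listing), a throwaway test like this -- not part of the book's code -- shows the difference:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "rtweekend.h"

#include <iostream>

int main() {
    interval i(0.0, 1.0);

    std::cout << i.contains(1.0)  << '\n';  // 1 (true)  -- endpoints included
    std::cout << i.surrounds(1.0) << '\n';  // 0 (false) -- endpoints excluded
    std::cout << i.surrounds(0.5) << '\n';  // 1 (true)
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~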
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
// Common Headers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ Highlight
#include "interval.h"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "ray.h"
#include "vec3.h"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [interval-rtweekend]: [rtweekend.h] Including the new interval class]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class hittable {
public:
...
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
virtual bool hit(const ray& r, interval ray_t, hit_record& rec) const = 0;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [hittable-with-interval]: [hittable.h] hittable::hit() using interval]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class hittable_list : public hittable {
public:
...
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
bool hit(const ray& r, interval ray_t, hit_record& rec) const override {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
hit_record temp_rec;
bool hit_anything = false;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
auto closest_so_far = ray_t.max;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
for (const auto& object : objects) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
if (object->hit(r, interval(ray_t.min, closest_so_far), temp_rec)) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
hit_anything = true;
closest_so_far = temp_rec.t;
rec = temp_rec;
}
}
return hit_anything;
}
...
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [hittable-list-with-interval]: [hittable_list.h]
hittable_list::hit() using interval]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class sphere : public hittable {
public:
...
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
bool hit(const ray& r, interval ray_t, hit_record& rec) const override {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
...
// Find the nearest root that lies in the acceptable range.
auto root = (-half_b - sqrtd) / a;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
if (!ray_t.surrounds(root)) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
root = (-half_b + sqrtd) / a;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
if (!ray_t.surrounds(root))
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
return false;
}
...
}
...
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [sphere-with-interval]: [sphere.h] sphere using interval]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
...
color ray_color(const ray& r, const hittable& world) {
hit_record rec;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
if (world.hit(r, interval(0, infinity), rec)) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
return 0.5 * (rec.normal + color(1,1,1));
}
vec3 unit_direction = unit_vector(r.direction());
auto a = 0.5*(unit_direction.y() + 1.0);
return (1.0-a)*color(1.0, 1.0, 1.0) + a*color(0.5, 0.7, 1.0);
}
...
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [main-with-interval]: [main.cc] The new main using interval]
Moving Camera Code Into Its Own Class
====================================================================================================
Before continuing, now is a good time to consolidate our camera and scene-render code into a single
new class: the `camera` class.
The camera class will be responsible for two important jobs:
1. Construct and dispatch rays into the world.
2. Use the results of these rays to construct the rendered image.
In this refactoring, we'll collect the `ray_color()` function, along with the image, camera, and
render sections of our main program.
The new camera class will contain a single public method, `render()`, plus two private helper
methods, `initialize()` and `ray_color()`.
Ultimately, the camera will follow the simplest usage pattern that we could think of: it will be
default constructed with no arguments, then the owning code will modify the camera's public variables
through simple assignment, and finally everything is initialized by a call to the `initialize()`
function. This pattern is chosen instead of the owner calling a constructor with a ton of parameters
or by defining and calling a bunch of setter methods. Instead, the owning code only needs to set
what it explicitly cares about. Finally, we could either have the owning code call `initialize()`,
or just have the camera call this function automatically at the start of `render()`. We'll use the
second approach.
After main creates a camera and sets default values, it will call the `render()` method.
The `render()` method will prepare the camera for rendering and then execute the render loop.
Here's the skeleton of our new `camera` class:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#ifndef CAMERA_H
#define CAMERA_H
#include "rtweekend.h"
#include "color.h"
#include "hittable.h"
class camera {
public:
/* Public Camera Parameters Here */
void render(const hittable& world) {
...
}
private:
/* Private Camera Variables Here */
void initialize() {
...
}
color ray_color(const ray& r, const hittable& world) const {
...
}
};
#endif
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [camera-skeleton]: [camera.h] The camera class skeleton]
To begin with, let's fill in the `ray_color()` function from `main.cc`:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class camera {
...
private:
...
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
color ray_color(const ray& r, const hittable& world) const {
hit_record rec;
if (world.hit(r, interval(0, infinity), rec)) {
return 0.5 * (rec.normal + color(1,1,1));
}
vec3 unit_direction = unit_vector(r.direction());
auto a = 0.5*(unit_direction.y() + 1.0);
return (1.0-a)*color(1.0, 1.0, 1.0) + a*color(0.5, 0.7, 1.0);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
};
#endif
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [camera-ray-color]: [camera.h] The camera::ray_color function]
Now we move almost everything from the `main()` function into our new camera class.
The only thing remaining in the `main()` function is the world construction.
Here's the camera class with newly migrated code:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
...
#include "rtweekend.h"
#include "color.h"
#include "hittable.h"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
#include <iostream>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class camera {
public:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
double aspect_ratio = 1.0; // Ratio of image width over height
int image_width = 100; // Rendered image width in pixel count
void render(const hittable& world) {
initialize();
std::cout << "P3\n" << image_width << ' ' << image_height << "\n255\n";
for (int j = 0; j < image_height; ++j) {
std::clog << "\rScanlines remaining: " << (image_height - j) << ' ' << std::flush;
for (int i = 0; i < image_width; ++i) {
auto pixel_center = pixel00_loc + (i * pixel_delta_u) + (j * pixel_delta_v);
auto ray_direction = pixel_center - center;
ray r(center, ray_direction);
color pixel_color = ray_color(r, world);
write_color(std::cout, pixel_color);
}
}
std::clog << "\rDone. \n";
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
private:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
int image_height; // Rendered image height
point3 center; // Camera center
point3 pixel00_loc; // Location of pixel 0, 0
vec3 pixel_delta_u; // Offset to pixel to the right
vec3 pixel_delta_v; // Offset to pixel below
void initialize() {
image_height = static_cast<int>(image_width / aspect_ratio);
image_height = (image_height < 1) ? 1 : image_height;
center = point3(0, 0, 0);
// Determine viewport dimensions.
auto focal_length = 1.0;
auto viewport_height = 2.0;
auto viewport_width = viewport_height * (static_cast<double>(image_width)/image_height);
// Calculate the vectors across the horizontal and down the vertical viewport edges.
auto viewport_u = vec3(viewport_width, 0, 0);
auto viewport_v = vec3(0, -viewport_height, 0);
// Calculate the horizontal and vertical delta vectors from pixel to pixel.
pixel_delta_u = viewport_u / image_width;
pixel_delta_v = viewport_v / image_height;
// Calculate the location of the upper left pixel.
auto viewport_upper_left =
center - vec3(0, 0, focal_length) - viewport_u/2 - viewport_v/2;
pixel00_loc = viewport_upper_left + 0.5 * (pixel_delta_u + pixel_delta_v);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
color ray_color(const ray& r, const hittable& world) const {
...
}
};
#endif
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [camera-working]: [camera.h] The working camera class]
And here's the much reduced main:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
#include "rtweekend.h"
#include "camera.h"
#include "hittable_list.h"
#include "sphere.h"
int main() {
hittable_list world;
world.add(make_shared<sphere>(point3(0,0,-1), 0.5));
world.add(make_shared<sphere>(point3(0,-100.5,-1), 100));
camera cam;
cam.aspect_ratio = 16.0 / 9.0;
cam.image_width = 400;
cam.render(world);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [main-with-new-camera]: [main.cc] The new main, using the new camera]
Running this newly refactored program should give us the same rendered image as before.
Antialiasing
====================================================================================================
If you zoom into the rendered images so far, you might notice the harsh "stair step" nature of edges
in our rendered images.
This stair-stepping is commonly referred to as "aliasing", or "jaggies".
When a real camera takes a picture, there are usually no jaggies along edges, because the edge
pixels are a blend of some foreground and some background.
Consider that unlike our rendered images, a true image of the world is continuous.
Put another way, the world (and any true image of it) has effectively infinite resolution.
We can get the same effect by averaging a bunch of samples for each pixel.
With a single ray through the center of each pixel, we are performing what is commonly called _point
sampling_.
The problem with point sampling can be illustrated by rendering a small checkerboard far away.
If this checkerboard consists of an 8×8 grid of black and white tiles, but only four rays hit
it, then all four rays might intersect only white tiles, or only black, or some odd combination.
In the real world, when we perceive a checkerboard far away with our eyes, we perceive it as a gray
color, instead of sharp points of black and white.
That's because our eyes are naturally doing what we want our ray tracer to do: integrate the
(continuous function of) light falling on a particular (discrete) region of our rendered image.
Clearly we don't gain anything by just resampling the same ray through the pixel center multiple
times -- we'd just get the same result each time.
Instead, we want to sample the light falling _around_ the pixel, and then integrate those samples to
approximate the true continuous result.
So, how do we integrate the light falling around the pixel?
We'll adopt the simplest model: sampling the square region centered at the pixel that extends
halfway to each of the four neighboring pixels.
This is not the optimal approach, but it is the most straight-forward.
(See [_A Pixel is Not a Little Square_][square-pixels] for a deeper dive into this topic.)
![Figure [pixel-samples]: Pixel samples](../images/fig-1.08-pixel-samples.jpg)
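To make this concrete before we build it into the camera, here's a tiny standalone 1-D sketch,
entirely separate from the renderer (the 30%/70% black/white split is an arbitrary assumption for
illustration). Point sampling at the pixel center gets an edge-straddling pixel completely wrong,
while averaging many random samples converges to the pixel's true coverage:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include <cstdlib>
#include <iostream>

int main() {
    // A "pixel" covering [0,1): the left 30% is black (0), the rest white (1).
    auto f = [](double x) { return (x < 0.3) ? 0.0 : 1.0; };

    // Point sampling at the pixel center always returns pure white.
    std::cout << "Center sample: " << f(0.5) << '\n';      // always 1

    // Averaging many random samples converges to the true coverage.
    const int n = 100000;
    double sum = 0;
    for (int i = 0; i < n; ++i)
        sum += f(std::rand() / (RAND_MAX + 1.0));
    std::cout << "Averaged:      " << sum / n << '\n';     // ~0.7
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~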
Some Random Number Utilities
-----------------------------
We're going to need a random number generator that returns real random numbers.
This function should return a canonical random number, which by convention falls in the range
$0 ≤ n < 1$.
The “less than” before the 1 is important, as we will sometimes take advantage of that.
A simple approach to this is to use the `rand()` function that can be found in `<cstdlib>`, which
returns a random integer in the range 0 to `RAND_MAX`.
Hence we can get a real random number as desired with the following code snippet, added to
`rtweekend.h`:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include <cmath>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ Highlight
#include <cstdlib>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include <limits>
#include <memory>
...
// Utility Functions
inline double degrees_to_radians(double degrees) {
return degrees * pi / 180.0;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ Highlight
inline double random_double() {
// Returns a random real in [0,1).
return rand() / (RAND_MAX + 1.0);
}
inline double random_double(double min, double max) {
// Returns a random real in [min,max).
return min + (max-min)*random_double();
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [random-double]: [rtweekend.h] random_double() functions]
C++ did not traditionally have a standard random number generator, but newer versions of C++ have
addressed this issue with the `<random>` header (if imperfectly, according to some experts).
If you want to use this, you can obtain a random number with the conditions we need as follows:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include <random>
inline double random_double() {
static std::uniform_real_distribution<double> distribution(0.0, 1.0);
static std::mt19937 generator;
return distribution(generator);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [random-double-alt]: [rtweekend.h] random_double(), alternate implementation]
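One variation worth noting (an option, not something this book relies on): the generator above is
default-seeded, so every run of the program produces the identical sequence of samples, which makes
renders reproducible. If you'd rather get different noise on each run, you could seed the generator
from `std::random_device`:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include <random>

inline double random_double() {
    static std::uniform_real_distribution<double> distribution(0.0, 1.0);
    // Seeding from std::random_device gives a different sequence on each run.
    static std::mt19937 generator(std::random_device{}());
    return distribution(generator);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~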
Generating Pixels with Multiple Samples
----------------------------------------
For a single pixel composed of multiple samples, we'll select samples from the area surrounding the
pixel and average the resulting light (color) values together.
First we'll update the `write_color()` function to account for the number of samples we use: we need
to find the average across all of the samples that we take.
To do this, we'll add the full color from each iteration, and then finish with a single division (by
the number of samples) at the end, before writing out the color.
To ensure that the color components of the final result remain within the proper $[0,1]$ bounds,
we'll add and use a small helper function: `interval::clamp(x)`.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class interval {
public:
...
bool surrounds(double x) const {
return min < x && x < max;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
double clamp(double x) const {
if (x < min) return min;
if (x > max) return max;
return x;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
...
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [clamp]: [interval.h] The interval::clamp() utility function]
And here's the updated `write_color()` function that takes the sum total of all light for the pixel
and the number of samples involved:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
void write_color(std::ostream &out, color pixel_color, int samples_per_pixel) {
auto r = pixel_color.x();
auto g = pixel_color.y();
auto b = pixel_color.z();
// Divide the color by the number of samples.
auto scale = 1.0 / samples_per_pixel;
r *= scale;
g *= scale;
b *= scale;
// Write the translated [0,255] value of each color component.
static const interval intensity(0.000, 0.999);
out << static_cast<int>(256 * intensity.clamp(r)) << ' '
<< static_cast<int>(256 * intensity.clamp(g)) << ' '
<< static_cast<int>(256 * intensity.clamp(b)) << '\n';
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [write-color-clamped]: [color.h] The multi-sample write_color() function]
Now let's update the camera class to define and use a new `camera::get_ray(i,j)` function, which
will generate different samples for each pixel.
This function will use a new helper function `pixel_sample_square()` that generates a random sample
point within the unit square centered at the origin.
We then transform the random sample from this ideal square back to the particular pixel we're
currently sampling.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class camera {
public:
double aspect_ratio = 1.0; // Ratio of image width over height
int image_width = 100; // Rendered image width in pixel count
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
int samples_per_pixel = 10; // Count of random samples for each pixel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
void render(const hittable& world) {
initialize();
std::cout << "P3\n" << image_width << ' ' << image_height << "\n255\n";
for (int j = 0; j < image_height; ++j) {
std::clog << "\rScanlines remaining: " << (image_height - j) << ' ' << std::flush;
for (int i = 0; i < image_width; ++i) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
color pixel_color(0,0,0);
for (int sample = 0; sample < samples_per_pixel; ++sample) {
ray r = get_ray(i, j);
pixel_color += ray_color(r, world);
}
write_color(std::cout, pixel_color, samples_per_pixel);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
}
}
std::clog << "\rDone. \n";
}
...
private:
...
void initialize() {
...
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
ray get_ray(int i, int j) const {
// Get a randomly sampled camera ray for the pixel at location i,j.
auto pixel_center = pixel00_loc + (i * pixel_delta_u) + (j * pixel_delta_v);
auto pixel_sample = pixel_center + pixel_sample_square();
auto ray_origin = center;
auto ray_direction = pixel_sample - ray_origin;
return ray(ray_origin, ray_direction);
}
vec3 pixel_sample_square() const {
// Returns a random point in the square surrounding a pixel at the origin.
auto px = -0.5 + random_double();
auto py = -0.5 + random_double();
return (px * pixel_delta_u) + (py * pixel_delta_v);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
...
};
#endif
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [camera-spp]: [camera.h] Camera with samples-per-pixel parameter]
(In addition to the new `pixel_sample_square()` function above, you'll also find the function
`pixel_sample_disk()` in the GitHub source code. This is included in case you'd like to experiment
with non-square pixels, but we won't be using it in this book. `pixel_sample_disk()` depends on the
function `random_in_unit_disk()` which is defined later on.)
Main is updated to set the new camera parameter.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
int main() {
...
camera cam;
cam.aspect_ratio = 16.0 / 9.0;
cam.image_width = 400;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
cam.samples_per_pixel = 100;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
cam.render(world);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [main-spp]: [main.cc] Setting the new samples-per-pixel parameter]
Zooming into the image that is produced, we can see the difference in edge pixels.
![Image 6: Before and after antialiasing
](../images/img-1.06-antialias-before-after.png class='pixel')
Diffuse Materials
====================================================================================================
Now that we have objects and multiple rays per pixel, we can make some realistic looking materials.
We’ll start with diffuse materials (also called _matte_). One question is whether we mix and match
geometry and materials (so that we can assign a material to multiple spheres, or vice versa) or if
geometry and materials are tightly bound (which could be useful for procedural objects where the
geometry and material are linked). We’ll go with separate -- which is usual in most renderers -- but
do be aware that there are alternative approaches.
A Simple Diffuse Material
--------------------------
Diffuse objects that don’t emit their own light merely take on the color of their surroundings, but
they do modulate that with their own intrinsic color. Light that reflects off a diffuse surface has
its direction randomized, so, if we send three rays into a crack between two diffuse surfaces they will
each have different random behavior:
![Figure [light-bounce]: Light ray bounces](../images/fig-1.09-light-bounce.jpg)
They might also be absorbed rather than reflected. The darker the surface, the more likely
the ray is absorbed (that’s why it's dark!). Really any algorithm that randomizes direction will
produce surfaces that look matte. Let's start with the most intuitive: a surface that randomly
bounces a ray equally in all directions. For this material, a ray that hits the surface has an
equal probability of bouncing in any direction away from the surface.
![Figure [random-vec-hor]: Equal reflection above the horizon
](../images/fig-1.10-random-vec-horizon.jpg)
This very intuitive material is the simplest kind of diffuse and -- indeed -- many of the first
raytracing papers used this diffuse method (before adopting a more accurate method that we'll be
implementing a little bit later). We don't currently have a way to randomly reflect a ray, so we'll
need to add a few functions to our vector utility header. The first thing we need is the ability to
generate arbitrary random vectors:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class vec3 {
public:
...
double length_squared() const {
return e[0]*e[0] + e[1]*e[1] + e[2]*e[2];
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ Highlight
static vec3 random() {
return vec3(random_double(), random_double(), random_double());
}
static vec3 random(double min, double max) {
return vec3(random_double(min,max), random_double(min,max), random_double(min,max));
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [vec-rand-util]: [vec3.h] vec3 random utility functions]
Then we need to figure out how to manipulate a random vector so that we only get results that are on
the surface of a hemisphere. There are analytical methods of doing this, but they are actually
surprisingly complicated to understand, and quite complicated to implement. Instead, we'll use
what is typically the easiest algorithm: a rejection method. A rejection method works by repeatedly
generating random samples until we produce a sample that meets the desired criteria. In other words,
keep rejecting samples until you find a good one.
There are many equally valid ways of generating a random vector on a hemisphere using the rejection
method, but for our purposes we will go with the simplest, which is:
1. Generate a random vector inside of the unit sphere
2. Normalize this vector
3. Invert the normalized vector if it falls onto the wrong hemisphere
First, we will use a rejection method to generate the random vector inside of the unit sphere. Pick
a random point in the unit cube, where $x$, $y$, and $z$ all range from -1 to +1, and reject this
point if it is outside the unit sphere.
![Figure [sphere-vec]: Two vectors were rejected before finding a good one
](../images/fig-1.11-sphere-vec.jpg)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
...
inline vec3 unit_vector(vec3 v) {
return v / v.length();
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ Highlight
inline vec3 random_in_unit_sphere() {
while (true) {
auto p = vec3::random(-1,1);
if (p.length_squared() < 1)
return p;
}
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [random-in-unit-sphere]: [vec3.h] The random_in_unit_sphere() function]
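As an aside, this loop terminates quickly: the unit sphere occupies $\frac{4\pi/3}{8} = \pi/6
\approx 52\%$ of the volume of the enclosing cube, so each candidate point is accepted about half
the time. Here's a standalone sketch (not part of the renderer) that estimates that acceptance rate
by brute force:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include <cstdlib>
#include <iostream>

int main() {
    // Count how often a random point in the cube [-1,+1]^3 lands inside the
    // unit sphere. The exact answer is (4/3)pi / 8 = pi/6, about 0.52.
    const int trials = 1000000;
    int accepted = 0;
    for (int i = 0; i < trials; ++i) {
        auto rnd = []() { return -1 + 2*(std::rand() / (RAND_MAX + 1.0)); };
        double x = rnd(), y = rnd(), z = rnd();
        if (x*x + y*y + z*z < 1)
            ++accepted;
    }
    std::cout << "Acceptance rate: " << double(accepted) / trials << '\n';  // ~0.52
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~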
Once we have a random vector in the unit sphere we need to normalize it to get a vector _on_ the
unit sphere.
![Figure [sphere-vec]: The accepted random vector is normalized to produce a unit vector
](../images/fig-1.12-sphere-unit-vec.jpg)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
...
inline vec3 random_in_unit_sphere() {
while (true) {
auto p = vec3::random(-1,1);
if (p.length_squared() < 1)
return p;
}
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ Highlight
inline vec3 random_unit_vector() {
return unit_vector(random_in_unit_sphere());
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [random-unit-vec]: [vec3.h] Random vector on the unit sphere]
And now that we have a random vector on the surface of the unit sphere, we can determine if it is on
the correct hemisphere by comparing against the surface normal:
![Figure [normal-hor]: The normal vector tells us which hemisphere we need
](../images/fig-1.13-surface-normal.jpg)
We can take the dot product of the surface normal and our random vector to determine if it's in the
correct hemisphere. If the dot product is positive, then the vector is in the correct hemisphere. If
the dot product is negative, then we need to invert the vector.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
...
inline vec3 random_unit_vector() {
return unit_vector(random_in_unit_sphere());
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ Highlight
inline vec3 random_on_hemisphere(const vec3& normal) {
vec3 on_unit_sphere = random_unit_vector();
if (dot(on_unit_sphere, normal) > 0.0) // In the same hemisphere as the normal
return on_unit_sphere;
else
return -on_unit_sphere;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [random-in-hemisphere]: [vec3.h] The random_on_hemisphere() function]
If a ray bounces off of a material and keeps 100% of its color, then we say that the material is
_white_. If a ray bounces off of a material and keeps 0% of its color, then we say that the
material is black. As a first demonstration of our new diffuse material we'll set the `ray_color`
function to return 50% of the color from a bounce. We should expect to get a nice gray color.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class camera {
...
private:
...
color ray_color(const ray& r, const hittable& world) const {
hit_record rec;
if (world.hit(r, interval(0, infinity), rec)) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
vec3 direction = random_on_hemisphere(rec.normal);
return 0.5 * ray_color(ray(rec.p, direction), world);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
}
vec3 unit_direction = unit_vector(r.direction());
auto a = 0.5*(unit_direction.y() + 1.0);
return (1.0-a)*color(1.0, 1.0, 1.0) + a*color(0.5, 0.7, 1.0);
}
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [ray-color-random-unit]: [camera.h]
ray_color() using a random ray direction]
Indeed we do get rather nice gray spheres:
![Image 7: First render of a diffuse sphere
](../images/img-1.07-first-diffuse.png class='pixel')
Limiting the Number of Child Rays
----------------------------------
There's one potential problem lurking here. Notice that the `ray_color` function is recursive. When
will it stop recursing? When it fails to hit anything. In some cases, however, that may be a long
time — long enough to blow the stack. To guard against that, let's limit the maximum recursion
depth, returning no light contribution at the maximum depth:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class camera {
public:
double aspect_ratio = 1.0; // Ratio of image width over height
int image_width = 100; // Rendered image width in pixel count
int samples_per_pixel = 10; // Count of random samples for each pixel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
int max_depth = 10; // Maximum number of ray bounces into scene
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
void render(const hittable& world) {
initialize();
std::cout << "P3\n" << image_width << ' ' << image_height << "\n255\n";
for (int j = 0; j < image_height; ++j) {
std::clog << "\rScanlines remaining: " << (image_height - j) << ' ' << std::flush;
for (int i = 0; i < image_width; ++i) {
color pixel_color(0,0,0);
for (int sample = 0; sample < samples_per_pixel; ++sample) {
ray r = get_ray(i, j);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
pixel_color += ray_color(r, max_depth, world);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
}
write_color(std::cout, pixel_color, samples_per_pixel);
}
}
std::clog << "\rDone. \n";
}
...
private:
...
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
color ray_color(const ray& r, int depth, const hittable& world) const {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
hit_record rec;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
// If we've exceeded the ray bounce limit, no more light is gathered.
if (depth <= 0)
return color(0,0,0);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
if (world.hit(r, interval(0, infinity), rec)) {
vec3 direction = random_on_hemisphere(rec.normal);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
return 0.5 * ray_color(ray(rec.p, direction), depth-1, world);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
}
vec3 unit_direction = unit_vector(r.direction());
auto a = 0.5*(unit_direction.y() + 1.0);
return (1.0-a)*color(1.0, 1.0, 1.0) + a*color(0.5, 0.7, 1.0);
}
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [ray-color-depth]: [camera.h] camera::ray_color() with depth limiting]
Update the main() function to use this new depth limit:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
int main() {
...
camera cam;
cam.aspect_ratio = 16.0 / 9.0;
cam.image_width = 400;
cam.samples_per_pixel = 100;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
cam.max_depth = 50;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
cam.render(world);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [main-ray-depth]: [main.cc] Using the new ray depth limiting]
For this very simple scene we should get basically the same result:
![Image 8: Second render of a diffuse sphere with limited bounces
](../images/img-1.08-second-diffuse.png class='pixel')
Fixing Shadow Acne
-------------------
There’s also a subtle bug that we need to address. A ray will attempt to accurately calculate the
intersection point when it intersects with a surface. Unfortunately for us, this calculation is
susceptible to floating point rounding errors which can cause the intersection point to be ever so
slightly off. This means that the origin of the next ray, the ray that is randomly scattered off of
the surface, is unlikely to be perfectly flush with the surface. It might be just above the surface.
It might be just below the surface. If the ray's origin is just below the surface then it could
intersect with that surface again, which means that it will find the nearest surface at
$t=0.00000001$ or whatever floating point approximation the hit function gives us. The simplest hack
to address this is just to ignore hits that are very close to the calculated intersection point:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class camera {
...
private:
...
color ray_color(const ray& r, int depth, const hittable& world) const {
hit_record rec;
// If we've exceeded the ray bounce limit, no more light is gathered.
if (depth <= 0)
return color(0,0,0);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
if (world.hit(r, interval(0.001, infinity), rec)) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
vec3 direction = random_on_hemisphere(rec.normal);
return 0.5 * ray_color(ray(rec.p, direction), depth-1, world);
}
vec3 unit_direction = unit_vector(r.direction());
auto a = 0.5*(unit_direction.y() + 1.0);
return (1.0-a)*color(1.0, 1.0, 1.0) + a*color(0.5, 0.7, 1.0);
}
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [reflect-tolerance]: [camera.h]
Calculating reflected ray origins with tolerance]
This gets rid of the shadow acne problem. Yes it is really called that. Here's the result:
![Image 9: Diffuse sphere with no shadow acne
](../images/img-1.09-no-acne.png class='pixel')
True Lambertian Reflection
---------------------------
Scattering reflected rays evenly about the hemisphere produces a nice soft diffuse model, but we can
definitely do better.
A more accurate representation of real diffuse objects is the _Lambertian_ distribution.
This distribution scatters reflected rays in a manner that is proportional to $\cos (\phi)$, where
$\phi$ is the angle between the reflected ray and the surface normal.
This means that a reflected ray is most likely to scatter in a direction near the surface normal,
and less likely to scatter in directions away from the normal.
This non-uniform Lambertian distribution does a better job of modeling material reflection in the
real world than our previous uniform scattering.
We can create this distribution by adding a random unit vector to the normal vector.
At the point of intersection on a surface there is the hit point, $\mathbf{P}$, and there is the
normal of the surface, $\mathbf{n}$. At the point of intersection, this surface has exactly two
sides, so there can only be two unique unit spheres tangent to any intersection point (one
unique sphere for each side of the surface). These two unit spheres will be displaced
from the surface by the length of their radius, which is exactly one for a unit sphere.
One sphere will be displaced in the direction of the surface's normal ($\mathbf{n}$) and one sphere
will be displaced in the opposite direction ($\mathbf{-n}$). This leaves us with two spheres of unit
size that will only be _just_ touching the surface at the intersection point. From this, one of the
spheres will have its center at $(\mathbf{P} + \mathbf{n})$ and the other sphere will have its
center at $(\mathbf{P} - \mathbf{n})$. The sphere with a center at $(\mathbf{P} - \mathbf{n})$ is
considered _inside_ the surface, whereas the sphere with center $(\mathbf{P} + \mathbf{n})$ is
considered _outside_ the surface.
We want to select the tangent unit sphere that is on the same side of the surface as the ray
origin. Pick a random point $\mathbf{S}$ on this unit radius sphere and send a ray from the hit
point $\mathbf{P}$ to the random point $\mathbf{S}$. Since this sphere is centered at
$(\mathbf{P} + \mathbf{n})$ and $\mathbf{S}$ sits one unit from that center, the new ray direction
is $(\mathbf{S}-\mathbf{P}) = \mathbf{n} + (\text{a random unit vector})$:
![Figure [rand-unitvec]: Randomly generating a vector according to Lambertian distribution
](../images/fig-1.14-rand-unitvec.jpg)
The change is actually fairly minimal:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class camera {
...
color ray_color(const ray& r, int depth, const hittable& world) const {
hit_record rec;
// If we've exceeded the ray bounce limit, no more light is gathered.
if (depth <= 0)
return color(0,0,0);
if (world.hit(r, interval(0.001, infinity), rec)) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
vec3 direction = rec.normal + random_unit_vector();
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
return 0.5 * ray_color(ray(rec.p, direction), depth-1, world);
}
vec3 unit_direction = unit_vector(r.direction());
auto a = 0.5*(unit_direction.y() + 1.0);
return (1.0-a)*color(1.0, 1.0, 1.0) + a*color(0.5, 0.7, 1.0);
}
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [ray-color-unit-sphere]: [camera.h] ray_color() with replacement diffuse]
After rendering we get a similar image:
![Image 10: Correct rendering of Lambertian spheres
](../images/img-1.10-correct-lambertian.png class='pixel')
It's hard to tell the difference between these two diffuse methods, given that our scene of two
spheres is so simple, but you should be able to notice two important visual differences:
1. The shadows are more pronounced after the change
2. Both spheres are tinted blue from the sky after the change
Both of these changes are due to the less uniform scattering of the light rays -- more rays are
scattering toward the normal. This means that diffuse objects will appear _darker_, because less
light bounces toward the camera. For the shadows, more light bounces straight up, so the area
underneath the sphere is darker.
Not a lot of common, everyday objects are perfectly diffuse, so our visual intuition of how these
objects behave under light can be poorly formed. As scenes become more complicated over the course
of the book, you are encouraged to switch between the different diffuse renderers presented here.
Most scenes of interest will contain a large amount of diffuse materials. You can gain valuable
insight by understanding the effect of different diffuse methods on the lighting of a scene.
Using Gamma Correction for Accurate Color Intensity
----------------------------------------------------
Note the shadowing under the sphere. The picture is very dark, but our spheres only absorb half the
energy of each bounce, so they are 50% reflectors. The spheres should look pretty bright (in real
life, a light grey) but they appear to be rather dark. We can see this more clearly if we walk
through the full brightness gamut for our diffuse material. We start by setting the reflectance of
the `ray_color` function from `0.5` (50%) to `0.1` (10%):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class camera {
...
color ray_color(const ray& r, int depth, const hittable& world) const {
hit_record rec;
// If we've exceeded the ray bounce limit, no more light is gathered.
if (depth <= 0)
return color(0,0,0);
if (world.hit(r, interval(0.001, infinity), rec)) {
vec3 direction = rec.normal + random_unit_vector();
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
return 0.1 * ray_color(ray(rec.p, direction), depth-1, world);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
}
vec3 unit_direction = unit_vector(r.direction());
auto a = 0.5*(unit_direction.y() + 1.0);
return (1.0-a)*color(1.0, 1.0, 1.0) + a*color(0.5, 0.7, 1.0);
}
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [ray-color-gamut]: [camera.h] ray_color() with 10% reflectance]
We render out at this new 10% reflectance. We then set reflectance to 30% and render again. We
repeat for 50%, 70%, and finally 90%. You can overlay these images from left to right in the photo
editor of your choice and you should get a very nice visual representation of the increasing
brightness of your chosen gamut. This is the one that we've been working with so far:
![Image 11: The gamut of our renderer so far
](../images/img-1.11-linear-gamut.png class='pixel')
If you look closely, or if you use a color picker, you should notice that the 50% reflectance
render (the one in the middle) is far too dark to be half-way between white and black (middle-gray).
Indeed, the 70% reflector is closer to middle-gray. The reason for this is that almost all
computer programs assume that an image is “gamma corrected” before being written into an image
file. This means that the 0 to 1 values have some transform applied before being stored as a byte.
Images with data that are written without being transformed are said to be in _linear space_,
whereas images that are transformed are said to be in _gamma space_. It is likely that the image
viewer you are using is expecting an image in gamma space, but we are giving it an image in linear
space. This is the reason why our image appears inaccurately dark.
There are many good reasons for why images should be stored in gamma space, but for our purposes we
just need to be aware of it. We are going to transform our data into gamma space so that our image
viewer can more accurately display our image. As a simple approximation, we can use “gamma 2” as our
transform, which is the power that you use when going from gamma space to linear space. We need to
go from linear space to gamma space, which means taking the inverse of "gamma 2", which means an
exponent of $1/\mathit{gamma}$, which is just the square-root.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
inline double linear_to_gamma(double linear_component)
{
return sqrt(linear_component);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
void write_color(std::ostream &out, color pixel_color, int samples_per_pixel) {
auto r = pixel_color.x();
auto g = pixel_color.y();
auto b = pixel_color.z();
// Divide the color by the number of samples.
auto scale = 1.0 / samples_per_pixel;
r *= scale;
g *= scale;
b *= scale;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
// Apply the linear to gamma transform.
r = linear_to_gamma(r);
g = linear_to_gamma(g);
b = linear_to_gamma(b);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
// Write the translated [0,255] value of each color component.
static const interval intensity(0.000, 0.999);
out << static_cast<int>(256 * intensity.clamp(r)) << ' '
<< static_cast<int>(256 * intensity.clamp(g)) << ' '
<< static_cast<int>(256 * intensity.clamp(b)) << '\n';
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [write-color-gamma]: [color.h] write_color(), with gamma correction]
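As a quick numeric check (using the same five reflectance values as the gamut test above): the
square-root maps linear $0.1$, $0.3$, $0.5$, $0.7$, $0.9$ to roughly $0.32$, $0.55$, $0.71$,
$0.84$, $0.95$. In particular, the 50% reflector is now written out as about $0.71$, which a
gamma-expecting viewer decodes back to the intended 50% light intensity.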
Using this gamma correction, we now get a much more consistent ramp from darkness to lightness:
![Image 12: The gamma-corrected render of two diffuse spheres
](../images/img-1.12-gamma-gamut.png class='pixel')
Metal
====================================================================================================
An Abstract Class for Materials
--------------------------------
If we want different objects to have different materials, we have a design decision. We could have
a universal material type with lots of parameters so any individual material type could just ignore
the parameters that don't affect it. This is not a bad approach. Or we could have an abstract
material class that encapsulates unique behavior. I am a fan of the latter approach. For our
program the material needs to do two things:
1. Produce a scattered ray (or say it absorbed the incident ray).
2. If scattered, say how much the ray should be attenuated.
This suggests the abstract class:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#ifndef MATERIAL_H
#define MATERIAL_H
#include "rtweekend.h"
class hit_record;
class material {
public:
virtual ~material() = default;
virtual bool scatter(
const ray& r_in, const hit_record& rec, color& attenuation, ray& scattered) const = 0;
};
#endif
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [material-initial]: [material.h] The material class]
A Data Structure to Describe Ray-Object Intersections
------------------------------------------------------
The `hit_record` is to avoid a bunch of arguments so we can stuff whatever info we want in there.
You can use arguments instead of an encapsulated type, it’s just a matter of taste. Hittables and
materials need to be able to reference the other's type in code so there is some circularity of the
references. In C++ we add the line `class material;` to tell the compiler that `material` is a class
that will be defined later. Since we're just specifying a pointer to the class, the compiler
doesn't need to know the details of the class, solving the circular reference issue.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
#include "rtweekend.h"
class material;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class hit_record {
public:
point3 p;
vec3 normal;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
shared_ptr<material> mat;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
double t;
bool front_face;
void set_face_normal(const ray& r, const vec3& outward_normal) {
front_face = dot(r.direction(), outward_normal) < 0;
normal = front_face ? outward_normal : -outward_normal;
}
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [hit-with-material]: [hittable.h] Hit record with added material pointer]
`hit_record` is just a way to stuff a bunch of arguments into a class so we can send them as a
group. When a ray hits a surface (a particular sphere for example), the material pointer in the
`hit_record` will be set to point at the material the sphere was given when it was set up in
`main()`. When the `ray_color()` routine gets the `hit_record`, it can call member functions of the
material pointer to find out what ray, if any, is scattered.
To achieve this, `hit_record` needs to be told the material that is assigned to the sphere.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class sphere : public hittable {
public:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
sphere(point3 _center, double _radius, shared_ptr<material> _material)
: center(_center), radius(_radius), mat(_material) {}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
bool hit(const ray& r, interval ray_t, hit_record& rec) const override {
...
rec.t = root;
rec.p = r.at(rec.t);
vec3 outward_normal = (rec.p - center) / radius;
rec.set_face_normal(r, outward_normal);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
rec.mat = mat;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
return true;
}
private:
point3 center;
double radius;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
shared_ptr<material> mat;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [sphere-material]: [sphere.h]
Ray-sphere intersection with added material information]
Modeling Light Scatter and Reflectance
---------------------------------------
For the Lambertian (diffuse) case we already have, it can either always scatter and attenuate by
its reflectance $R$, or it can scatter with no attenuation but absorb the fraction $1-R$ of the
rays (a ray that isn't scattered is just absorbed into the material). It could also be a mixture of
both those strategies. We will choose to always scatter, so Lambertian materials become this simple class:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class material {
...
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ Highlight
class lambertian : public material {
public:
lambertian(const color& a) : albedo(a) {}
bool scatter(const ray& r_in, const hit_record& rec, color& attenuation, ray& scattered)
const override {
auto scatter_direction = rec.normal + random_unit_vector();
scattered = ray(rec.p, scatter_direction);
attenuation = albedo;
return true;
}
private:
color albedo;
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [lambertian-initial]: [material.h] The new lambertian material class]
Note the third option: we could scatter with some fixed probability $p$ and have attenuation be
$\mathit{albedo}/p$. Your choice.
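To make that third option concrete, here's a sketch of what such a material might look like. This
is a hypothetical variant -- the class name and `scatter_probability` are our own inventions, and
nothing else in this book uses it:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class lambertian_probabilistic : public material {
  public:
    lambertian_probabilistic(const color& a, double p) : albedo(a), scatter_probability(p) {}

    bool scatter(const ray& r_in, const hit_record& rec, color& attenuation, ray& scattered)
    const override {
        // Absorb the ray with probability 1-p...
        if (random_double() >= scatter_probability)
            return false;

        auto scatter_direction = rec.normal + random_unit_vector();
        scattered = ray(rec.p, scatter_direction);

        // ...and boost the surviving rays by 1/p so the expected reflectance
        // still works out to the albedo.
        attenuation = albedo / scatter_probability;
        return true;
    }

  private:
    color albedo;
    double scatter_probability;
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~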
If you read the code above carefully, you'll notice a small chance of mischief. If the random unit
vector we generate is exactly opposite the normal vector, the two will sum to zero, which will
result in a zero scatter direction vector. This leads to bad scenarios later on (infinities and
NaNs), so we need to intercept the condition before we pass it on.
In service of this, we'll create a new vector method -- `vec3::near_zero()` -- that returns true if
the vector is very close to zero in all dimensions.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class vec3 {
...
double length_squared() const {
return e[0]*e[0] + e[1]*e[1] + e[2]*e[2];
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ Highlight
bool near_zero() const {
// Return true if the vector is close to zero in all dimensions.
auto s = 1e-8;
return (fabs(e[0]) < s) && (fabs(e[1]) < s) && (fabs(e[2]) < s);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
...
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [vec3-near-zero]: [vec3.h] The vec3::near_zero() method]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class lambertian : public material {
public:
lambertian(const color& a) : albedo(a) {}
bool scatter(const ray& r_in, const hit_record& rec, color& attenuation, ray& scattered)
const override {
auto scatter_direction = rec.normal + random_unit_vector();
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
// Catch degenerate scatter direction
if (scatter_direction.near_zero())
scatter_direction = rec.normal;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
scattered = ray(rec.p, scatter_direction);
attenuation = albedo;
return true;
}
private:
color albedo;
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [lambertian-catch-zero]: [material.h] Lambertian scatter, bullet-proof]
Mirrored Light Reflection
--------------------------
For polished metals the ray won’t be randomly scattered. The key question is: How does a ray get
reflected from a metal mirror? Vector math is our friend here:
![Figure [reflection]: Ray reflection](../images/fig-1.15-reflection.jpg)
The reflected ray direction in red is just $\mathbf{v} + 2\mathbf{b}$. In our design, $\mathbf{n}$
is a unit vector, but $\mathbf{v}$ may not be. The length of $\mathbf{b}$ should be $\mathbf{v}
\cdot \mathbf{n}$. Because $\mathbf{v}$ points in, we will need a minus sign, yielding:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
...

inline vec3 random_on_hemisphere(const vec3& normal) {
    ...
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ Highlight
inline vec3 reflect(const vec3& v, const vec3& n) {
    return v - 2*dot(v,n)*n;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
...
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [vec3-reflect]: [vec3.h] vec3 reflection function]
The metal material just reflects rays using that formula:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
...

class lambertian : public material {
    ...
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ Highlight
class metal : public material {
  public:
    metal(const color& a) : albedo(a) {}

    bool scatter(const ray& r_in, const hit_record& rec, color& attenuation, ray& scattered)
    const override {
        vec3 reflected = reflect(unit_vector(r_in.direction()), rec.normal);
        scattered = ray(rec.p, reflected);
        attenuation = albedo;
        return true;
    }

  private:
    color albedo;
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [metal-material]: [material.h] Metal material with reflectance function]
We need to modify the `ray_color()` function for all of our changes:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
...
#include "rtweekend.h"
#include "color.h"
#include "hittable.h"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
#include "material.h"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
...
class camera {
  ...

  private:
    ...

    color ray_color(const ray& r, int depth, const hittable& world) const {
        hit_record rec;

        // If we've exceeded the ray bounce limit, no more light is gathered.
        if (depth <= 0)
            return color(0,0,0);

        if (world.hit(r, interval(0.001, infinity), rec)) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
            ray scattered;
            color attenuation;
            if (rec.mat->scatter(r, rec, attenuation, scattered))
                return attenuation * ray_color(scattered, depth-1, world);
            return color(0,0,0);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
        }

        vec3 unit_direction = unit_vector(r.direction());
        auto a = 0.5*(unit_direction.y() + 1.0);
        return (1.0-a)*color(1.0, 1.0, 1.0) + a*color(0.5, 0.7, 1.0);
    }
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [ray-color-scatter]: [camera.h] Ray color with scattered reflectance]
A Scene with Metal Spheres
---------------------------
Now let’s add some metal spheres to our scene:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "rtweekend.h"
#include "camera.h"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
#include "color.h"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "hittable_list.h"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
#include "material.h"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "sphere.h"
int main() {
hittable_list world;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    auto material_ground = make_shared<lambertian>(color(0.8, 0.8, 0.0));
    auto material_center = make_shared<lambertian>(color(0.7, 0.3, 0.3));
    auto material_left   = make_shared<metal>(color(0.8, 0.8, 0.8));
    auto material_right  = make_shared<metal>(color(0.8, 0.6, 0.2));

    world.add(make_shared<sphere>(point3( 0.0, -100.5, -1.0), 100.0, material_ground));
    world.add(make_shared<sphere>(point3( 0.0,    0.0, -1.0),   0.5, material_center));
    world.add(make_shared<sphere>(point3(-1.0,    0.0, -1.0),   0.5, material_left));
    world.add(make_shared<sphere>(point3( 1.0,    0.0, -1.0),   0.5, material_right));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    camera cam;

    cam.aspect_ratio      = 16.0 / 9.0;
    cam.image_width       = 400;
    cam.samples_per_pixel = 100;
    cam.max_depth         = 50;

    cam.render(world);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [scene-with-metal]: [main.cc] Scene with metal spheres]
Which gives:
![Image 13: Shiny metal
](../images/img-1.13-metal-shiny.png class='pixel')
Fuzzy Reflection
-----------------
We can also randomize the reflected direction by using a small sphere and choosing a new endpoint
for the ray.
We'll use a random point from the surface of a sphere centered on the original endpoint, scaled by
the fuzz factor.
![Figure [reflect-fuzzy]: Generating fuzzed reflection rays](../images/fig-1.16-reflect-fuzzy.jpg)
The bigger the sphere, the fuzzier the reflections will be. This suggests adding a fuzziness
parameter that is just the radius of the sphere (so zero is no perturbation). The catch is that for
big spheres or grazing rays, we may scatter below the surface. We can just have the surface
absorb those.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class metal : public material {
  public:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    metal(const color& a, double f) : albedo(a), fuzz(f < 1 ? f : 1) {}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    bool scatter(const ray& r_in, const hit_record& rec, color& attenuation, ray& scattered)
    const override {
        vec3 reflected = reflect(unit_vector(r_in.direction()), rec.normal);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
        scattered = ray(rec.p, reflected + fuzz*random_unit_vector());
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
        attenuation = albedo;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
        return (dot(scattered.direction(), rec.normal) > 0);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    }

  private:
    color albedo;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    double fuzz;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [metal-fuzz]: [material.h] Metal material fuzziness]
We can try that out by adding fuzziness 0.3 and 1.0 to the metals:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
int main() {
    ...

    auto material_ground = make_shared<lambertian>(color(0.8, 0.8, 0.0));
    auto material_center = make_shared<lambertian>(color(0.7, 0.3, 0.3));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    auto material_left   = make_shared<metal>(color(0.8, 0.8, 0.8), 0.3);
    auto material_right  = make_shared<metal>(color(0.8, 0.6, 0.2), 1.0);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    ...
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [metal-fuzz-spheres]: [main.cc] Metal spheres with fuzziness]
![Image 14: Fuzzed metal
](../images/img-1.14-metal-fuzz.png class='pixel')
Dielectrics
====================================================================================================
Clear materials such as water, glass, and diamond are dielectrics. When a light ray hits them, it
splits into a reflected ray and a refracted (transmitted) ray. We'll handle that by randomly
choosing between reflection and refraction, generating only one scattered ray per interaction; with
many samples per pixel, the random choice averages out to the correct blend of the two.
Refraction
-----------
The hardest part to debug is the refracted ray. I usually first just have all the light refract if
there is a refraction ray at all. For this project, I tried to put two glass balls in our scene, and
I got this (I have not told you how to do this right or wrong yet, but soon!):
![Image 15: Glass first
](../images/img-1.15-glass-first.png class='pixel')
Is that right? Glass balls look odd in real life. But no, it isn't right: the world should be
flipped upside down, and there shouldn't be any weird black stuff. I just printed out the ray going
straight through the middle of the image, and it was clearly wrong. That often does the job.
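If you want to try that probe yourself, here's a sketch of one way to do it (my own aside, not part
of the book's code): a few throwaway lines inside `camera::render()` that trace the single ray
through the image center, using the `vec3` stream output operator we already have:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
// Temporary debugging snippet for camera::render(), after initialize() runs.
// Follows the one ray through the image center so you can eyeball its values.
int ci = image_width / 2;
int cj = image_height / 2;
ray r = get_ray(ci, cj);
std::clog << "center ray origin: " << r.origin()
          << "  direction: " << r.direction() << '\n';
std::clog << "center ray color:  " << ray_color(r, max_depth, world) << '\n';
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~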
Snell's Law
------------
The refraction is described by Snell’s law:
$$ \eta \cdot \sin\theta = \eta' \cdot \sin\theta' $$
Where $\theta$ and $\theta'$ are the angles from the normal, and $\eta$ and $\eta'$ (pronounced
"eta" and "eta prime") are the refractive indices (typically air = 1.0, glass = 1.3–1.7, diamond =
2.4). The geometry is:
![Figure [refraction]: Ray refraction](../images/fig-1.17-refraction.jpg)
In order to determine the direction of the refracted ray, we have to solve for $\sin\theta'$:
$$ \sin\theta' = \frac{\eta}{\eta'} \cdot \sin\theta $$
On the refracted side of the surface there is a refracted ray $\mathbf{R'}$ and a normal
$\mathbf{n'}$, and there exists an angle, $\theta'$, between them. We can split $\mathbf{R'}$ into
the parts of the ray that are perpendicular to $\mathbf{n'}$ and parallel to $\mathbf{n'}$:
$$ \mathbf{R'} = \mathbf{R'}_{\bot} + \mathbf{R'}_{\parallel} $$
If we solve for $\mathbf{R'}_{\bot}$ and $\mathbf{R'}_{\parallel}$ we get:
$$ \mathbf{R'}_{\bot} = \frac{\eta}{\eta'} (\mathbf{R} + \cos\theta \mathbf{n}) $$
$$ \mathbf{R'}_{\parallel} = -\sqrt{1 - |\mathbf{R'}_{\bot}|^2} \mathbf{n} $$
You can go ahead and prove this for yourself if you want, but we will treat it as fact and move on.
The rest of the book will not require you to understand the proof.
We know the value of every term on the right-hand side except for $\cos\theta$. It is well known
that the dot product of two vectors can be expressed in terms of the cosine of the angle between
them:
$$ \mathbf{a} \cdot \mathbf{b} = |\mathbf{a}| |\mathbf{b}| \cos\theta $$
If we restrict $\mathbf{a}$ and $\mathbf{b}$ to be unit vectors:
$$ \mathbf{a} \cdot \mathbf{b} = \cos\theta $$
We can now rewrite $\mathbf{R'}_{\bot}$ in terms of known quantities:
$$ \mathbf{R'}_{\bot} =
\frac{\eta}{\eta'} (\mathbf{R} + (\mathbf{-R} \cdot \mathbf{n}) \mathbf{n}) $$
When we combine them back together, we can write a function to calculate $\mathbf{R'}$:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
...

inline vec3 reflect(const vec3& v, const vec3& n) {
    return v - 2*dot(v,n)*n;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ Highlight
inline vec3 refract(const vec3& uv, const vec3& n, double etai_over_etat) {
    auto cos_theta = fmin(dot(-uv, n), 1.0);
    vec3 r_out_perp = etai_over_etat * (uv + cos_theta*n);
    vec3 r_out_parallel = -sqrt(fabs(1.0 - r_out_perp.length_squared())) * n;
    return r_out_perp + r_out_parallel;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [refract]: [vec3.h] Refraction function]
And the dielectric material that always refracts is:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
...

class metal : public material {
    ...
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ Highlight
class dielectric : public material {
  public:
    dielectric(double index_of_refraction) : ir(index_of_refraction) {}

    bool scatter(const ray& r_in, const hit_record& rec, color& attenuation, ray& scattered)
    const override {
        attenuation = color(1.0, 1.0, 1.0);
        double refraction_ratio = rec.front_face ? (1.0/ir) : ir;

        vec3 unit_direction = unit_vector(r_in.direction());
        vec3 refracted = refract(unit_direction, rec.normal, refraction_ratio);

        scattered = ray(rec.p, refracted);
        return true;
    }

  private:
    double ir; // Index of Refraction
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [dielectric-always-refract]: [material.h]
Dielectric material class that always refracts]
Now we'll update the scene to change the left and center spheres to glass:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    auto material_ground = make_shared<lambertian>(color(0.8, 0.8, 0.0));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    auto material_center = make_shared<dielectric>(1.5);
    auto material_left   = make_shared<dielectric>(1.5);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    auto material_right  = make_shared<metal>(color(0.8, 0.6, 0.2), 1.0);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [two-glass]: [main.cc] Changing left and center spheres to glass]
This gives us the following result:
![Image 16: Glass sphere that always refracts
](../images/img-1.16-glass-always-refract.png class='pixel')
Total Internal Reflection
--------------------------
That definitely doesn't look right. One troublesome practical issue is that when the ray is in the
material with the higher refractive index, Snell's law can have no real solution, and in that case
no refraction is possible. If we refer back to Snell's law and the derivation of $\sin\theta'$:
$$ \sin\theta' = \frac{\eta}{\eta'} \cdot \sin\theta $$
If the ray is inside glass and outside is air ($\eta = 1.5$ and $\eta' = 1.0$):
$$ \sin\theta' = \frac{1.5}{1.0} \cdot \sin\theta $$
The value of $\sin\theta'$ cannot be greater than 1. So, if,
$$ \frac{1.5}{1.0} \cdot \sin\theta > 1.0 $$
the equality between the two sides of the equation is broken, and a solution cannot exist. If a
solution does not exist, the glass cannot refract, and therefore must reflect the ray:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
if (refraction_ratio * sin_theta > 1.0) {
    // Must Reflect
    ...
} else {
    // Can Refract
    ...
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [dielectric-can-refract-1]: [material.h] Determining if the ray can refract]
Here all the light is reflected, and because in practice that is usually inside solid objects, it
is called “total internal reflection”. This is why sometimes the water-air boundary acts as a
perfect mirror when you are submerged.
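To put a number on when this happens: for a ray inside glass ($\eta = 1.5$) heading into air
($\eta' = 1.0$), refraction is only possible while $1.5 \cdot \sin\theta \leq 1$, which gives a
critical angle of

$$ \theta_c = \arcsin\left(\frac{1.0}{1.5}\right) \approx 41.8° $$

Any ray that hits the inside surface more than about $42°$ from the normal is totally reflected.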
We can solve for `sin_theta` using the trigonometric identities:
$$ \sin\theta = \sqrt{1 - \cos^2\theta} $$
and
$$ \cos\theta = \mathbf{-R} \cdot \mathbf{n} $$
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
double cos_theta = fmin(dot(-unit_direction, rec.normal), 1.0);
double sin_theta = sqrt(1.0 - cos_theta*cos_theta);

if (refraction_ratio * sin_theta > 1.0) {
    // Must Reflect
    ...
} else {
    // Can Refract
    ...
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [dielectric-can-refract-2]: [material.h] Determining if the ray can refract]
And the dielectric material that always refracts (when possible) is:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class dielectric : public material {
  public:
    dielectric(double index_of_refraction) : ir(index_of_refraction) {}

    bool scatter(const ray& r_in, const hit_record& rec, color& attenuation, ray& scattered)
    const override {
        attenuation = color(1.0, 1.0, 1.0);
        double refraction_ratio = rec.front_face ? (1.0/ir) : ir;

        vec3 unit_direction = unit_vector(r_in.direction());
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
        double cos_theta = fmin(dot(-unit_direction, rec.normal), 1.0);
        double sin_theta = sqrt(1.0 - cos_theta*cos_theta);

        bool cannot_refract = refraction_ratio * sin_theta > 1.0;
        vec3 direction;

        if (cannot_refract)
            direction = reflect(unit_direction, rec.normal);
        else
            direction = refract(unit_direction, rec.normal, refraction_ratio);

        scattered = ray(rec.p, direction);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
        return true;
    }

  private:
    double ir; // Index of Refraction
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [dielectric-with-refraction]: [material.h]
Dielectric material class with reflection]
Attenuation is always 1 -- the glass surface absorbs nothing. If we try that out with these
parameters:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ Highlight
    auto material_ground = make_shared<lambertian>(color(0.8, 0.8, 0.0));
    auto material_center = make_shared<lambertian>(color(0.1, 0.2, 0.5));
    auto material_left   = make_shared<dielectric>(1.5);
    auto material_right  = make_shared<metal>(color(0.8, 0.6, 0.2), 0.0);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [scene-dielectric]: [main.cc] Scene with dielectric and shiny sphere]
We get:
![Image 17: Glass sphere that sometimes refracts
](../images/img-1.17-glass-sometimes-refract.png class='pixel')
Schlick Approximation
----------------------
Now real glass has reflectivity that varies with angle -- look at a window at a steep angle and it
becomes a mirror. There is a big ugly equation for that, but almost everybody uses a cheap and
surprisingly accurate polynomial approximation by Christophe Schlick. This yields our full glass
material:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class dielectric : public material {
  public:
    dielectric(double index_of_refraction) : ir(index_of_refraction) {}

    bool scatter(const ray& r_in, const hit_record& rec, color& attenuation, ray& scattered)
    const override {
        attenuation = color(1.0, 1.0, 1.0);
        double refraction_ratio = rec.front_face ? (1.0/ir) : ir;

        vec3 unit_direction = unit_vector(r_in.direction());
        double cos_theta = fmin(dot(-unit_direction, rec.normal), 1.0);
        double sin_theta = sqrt(1.0 - cos_theta*cos_theta);

        bool cannot_refract = refraction_ratio * sin_theta > 1.0;
        vec3 direction;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
        if (cannot_refract || reflectance(cos_theta, refraction_ratio) > random_double())
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
            direction = reflect(unit_direction, rec.normal);
        else
            direction = refract(unit_direction, rec.normal, refraction_ratio);

        scattered = ray(rec.p, direction);
        return true;
    }

  private:
    double ir; // Index of Refraction
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    static double reflectance(double cosine, double ref_idx) {
        // Use Schlick's approximation for reflectance.
        auto r0 = (1-ref_idx) / (1+ref_idx);
        r0 = r0*r0;
        return r0 + (1-r0)*pow((1 - cosine),5);
    }
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [glass]: [material.h] Full glass material]
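As a quick sanity check on Schlick's approximation: at normal incidence ($\cos\theta = 1$) it
reduces to $r_0$, so for a glass sphere in air ($\mathit{ref\_idx} = 1.5$):

$$ r_0 = \left(\frac{1 - 1.5}{1 + 1.5}\right)^2 = \left(\frac{-0.5}{2.5}\right)^2 = 0.04 $$

Head-on rays reflect only about 4% of the light, while at grazing incidence ($\cos\theta \to 0$)
the reflectance climbs toward 1 -- exactly the window-as-mirror behavior described above.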
Modeling a Hollow Glass Sphere
-------------------------------
An interesting and easy trick with dielectric spheres is to note that if you use a negative radius,
the geometry is unaffected, but the surface normal points inward (we compute the outward normal as
$(\mathbf{P} - \mathbf{C}) / r$, so a negative radius flips its direction). This can be used as a
bubble to make a hollow glass sphere:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    ...
    world.add(make_shared<sphere>(point3( 0.0, -100.5, -1.0), 100.0, material_ground));
    world.add(make_shared<sphere>(point3( 0.0,    0.0, -1.0),   0.5, material_center));
    world.add(make_shared<sphere>(point3(-1.0,    0.0, -1.0),   0.5, material_left));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    world.add(make_shared<sphere>(point3(-1.0,    0.0, -1.0),  -0.4, material_left));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    world.add(make_shared<sphere>(point3( 1.0,    0.0, -1.0),   0.5, material_right));
    ...
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [scene-hollow-glass]: [main.cc] Scene with hollow glass sphere]
This gives:
![Image 18: A hollow glass sphere
](../images/img-1.18-glass-hollow.png class='pixel')
Positionable Camera
====================================================================================================
Cameras, like dielectrics, are a pain to debug, so I always develop mine incrementally.
First, let’s allow for an adjustable field of view (_fov_).
This is the visual angle from edge to edge of the rendered image.
Since our image is not square, the fov is different horizontally and vertically.
I always use vertical fov.
I also usually specify it in degrees and change to radians inside a constructor -- a matter of
personal taste.
Camera Viewing Geometry
------------------------
First, we'll keep the rays coming from the origin and heading to the $z = -1$ plane. We could make
it the $z = -2$ plane, or whatever, as long as we made $h$ a ratio to that distance. Here is our
setup:
![Figure [cam-view-geom]: Camera viewing geometry (from the side)
](../images/fig-1.18-cam-view-geom.jpg)
This implies $h = \tan(\frac{\theta}{2})$. (As a check, our default $90°$ fov gives
$h = \tan(45°) = 1$ and a viewport height of 2, matching the camera we had before.) Our camera now
becomes:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class camera {
  public:
    double aspect_ratio      = 1.0;  // Ratio of image width over height
    int    image_width       = 100;  // Rendered image width in pixel count
    int    samples_per_pixel = 10;   // Count of random samples for each pixel
    int    max_depth         = 10;   // Maximum number of ray bounces into scene
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    double vfov = 90;  // Vertical view angle (field of view)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    void render(const hittable& world) {
    ...

  private:
    ...

    void initialize() {
        image_height = static_cast<int>(image_width / aspect_ratio);
        image_height = (image_height < 1) ? 1 : image_height;

        center = point3(0, 0, 0);

        // Determine viewport dimensions.
        auto focal_length = 1.0;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
        auto theta = degrees_to_radians(vfov);
        auto h = tan(theta/2);
        auto viewport_height = 2 * h * focal_length;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
        auto viewport_width = viewport_height * (static_cast<double>(image_width)/image_height);

        // Calculate the vectors across the horizontal and down the vertical viewport edges.
        auto viewport_u = vec3(viewport_width, 0, 0);
        auto viewport_v = vec3(0, -viewport_height, 0);

        // Calculate the horizontal and vertical delta vectors from pixel to pixel.
        pixel_delta_u = viewport_u / image_width;
        pixel_delta_v = viewport_v / image_height;

        // Calculate the location of the upper left pixel.
        auto viewport_upper_left =
            center - vec3(0, 0, focal_length) - viewport_u/2 - viewport_v/2;
        pixel00_loc = viewport_upper_left + 0.5 * (pixel_delta_u + pixel_delta_v);
    }

    ...
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [camera-fov]: [camera.h] Camera with adjustable field-of-view (fov)]
We'll test out these changes with a simple scene of two touching spheres, using a 90° field of view.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
int main() {
    hittable_list world;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    auto R = cos(pi/4);

    auto material_left  = make_shared<lambertian>(color(0,0,1));
    auto material_right = make_shared<lambertian>(color(1,0,0));

    world.add(make_shared<sphere>(point3(-R, 0, -1), R, material_left));
    world.add(make_shared<sphere>(point3( R, 0, -1), R, material_right));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    camera cam;

    cam.aspect_ratio      = 16.0 / 9.0;
    cam.image_width       = 400;
    cam.samples_per_pixel = 100;
    cam.max_depth         = 50;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    cam.vfov = 90;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    cam.render(world);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [scene-wide-angle]: [main.cc] Scene with wide-angle camera]
This gives us the rendering:
![Image 19: A wide-angle view
](../images/img-1.19-wide-view.png class='pixel')
Positioning and Orienting the Camera
-------------------------------------
To get an arbitrary viewpoint, let’s first name the points we care about. We’ll call the position
where we place the camera _lookfrom_, and the point we look at _lookat_. (Later, if you want, you
could define a direction to look in instead of a point to look at.)
We also need a way to specify the roll, or sideways tilt, of the camera: the rotation around the
lookat-lookfrom axis.
Another way to think about it is that even if you keep `lookfrom` and `lookat` constant, you can
still rotate your head around your nose. What we need is a way to specify an “up” vector for the
camera.
![Figure [cam-view-dir]: Camera view direction](../images/fig-1.19-cam-view-dir.jpg)
We can specify any up vector we want, as long as it's not parallel to the view direction.
Project this up vector onto the plane orthogonal to the view direction to get a camera-relative up
vector.
I use the common convention of naming this the “view up” (_vup_) vector.
After a few cross products and vector normalizations, we now have a complete orthonormal basis
$(u,v,w)$ to describe our camera’s orientation.
$u$ will be the unit vector pointing to camera right, $v$ is the unit vector pointing to camera up,
$w$ is the unit vector pointing opposite the view direction (since we use right-hand coordinates),
and the camera center is at the origin.
![Figure [cam-view-up]: Camera view up direction](../images/fig-1.20-cam-view-up.jpg)
Like before, when our fixed camera faced $-Z$, our arbitrary view camera faces $-w$.
Keep in mind that we can -- but we don’t have to -- use world up $(0,1,0)$ to specify vup.
This is convenient and will naturally keep your camera horizontally level until you decide to
experiment with crazy camera angles.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class camera {
  public:
    double aspect_ratio      = 1.0;  // Ratio of image width over height
    int    image_width       = 100;  // Rendered image width in pixel count
    int    samples_per_pixel = 10;   // Count of random samples for each pixel
    int    max_depth         = 10;   // Maximum number of ray bounces into scene

    double vfov = 90;  // Vertical view angle (field of view)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    point3 lookfrom = point3(0,0,-1);  // Point camera is looking from
    point3 lookat   = point3(0,0,0);   // Point camera is looking at
    vec3   vup      = vec3(0,1,0);     // Camera-relative "up" direction
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    ...

  private:
    int    image_height;    // Rendered image height
    point3 center;          // Camera center
    point3 pixel00_loc;     // Location of pixel 0, 0
    vec3   pixel_delta_u;   // Offset to pixel to the right
    vec3   pixel_delta_v;   // Offset to pixel below
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    vec3   u, v, w;         // Camera frame basis vectors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    void initialize() {
        image_height = static_cast<int>(image_width / aspect_ratio);
        image_height = (image_height < 1) ? 1 : image_height;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
        center = lookfrom;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
        // Determine viewport dimensions.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
        auto focal_length = (lookfrom - lookat).length();
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
        auto theta = degrees_to_radians(vfov);
        auto h = tan(theta/2);
        auto viewport_height = 2 * h * focal_length;
        auto viewport_width = viewport_height * (static_cast<double>(image_width)/image_height);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
        // Calculate the u,v,w unit basis vectors for the camera coordinate frame.
        w = unit_vector(lookfrom - lookat);
        u = unit_vector(cross(vup, w));
        v = cross(w, u);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
        // Calculate the vectors across the horizontal and down the vertical viewport edges.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
        vec3 viewport_u = viewport_width * u;    // Vector across viewport horizontal edge
        vec3 viewport_v = viewport_height * -v;  // Vector down viewport vertical edge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
        // Calculate the horizontal and vertical delta vectors from pixel to pixel.
        pixel_delta_u = viewport_u / image_width;
        pixel_delta_v = viewport_v / image_height;

        // Calculate the location of the upper left pixel.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
        auto viewport_upper_left = center - (focal_length * w) - viewport_u/2 - viewport_v/2;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
        pixel00_loc = viewport_upper_left + 0.5 * (pixel_delta_u + pixel_delta_v);
    }

    ...
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [camera-orient]: [camera.h] Positionable and orientable camera]
We'll change back to the prior scene, and use the new viewpoint:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
int main() {
    hittable_list world;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    auto material_ground = make_shared<lambertian>(color(0.8, 0.8, 0.0));
    auto material_center = make_shared<lambertian>(color(0.1, 0.2, 0.5));
    auto material_left   = make_shared<dielectric>(1.5);
    auto material_right  = make_shared<metal>(color(0.8, 0.6, 0.2), 0.0);

    world.add(make_shared<sphere>(point3( 0.0, -100.5, -1.0), 100.0, material_ground));
    world.add(make_shared<sphere>(point3( 0.0,    0.0, -1.0),   0.5, material_center));
    world.add(make_shared<sphere>(point3(-1.0,    0.0, -1.0),   0.5, material_left));
    world.add(make_shared<sphere>(point3(-1.0,    0.0, -1.0),  -0.4, material_left));
    world.add(make_shared<sphere>(point3( 1.0,    0.0, -1.0),   0.5, material_right));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    camera cam;

    cam.aspect_ratio      = 16.0 / 9.0;
    cam.image_width       = 400;
    cam.samples_per_pixel = 100;
    cam.max_depth         = 50;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    cam.vfov     = 90;
    cam.lookfrom = point3(-2,2,1);
    cam.lookat   = point3(0,0,-1);
    cam.vup      = vec3(0,1,0);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    cam.render(world);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [scene-free-view]: [main.cc] Scene with alternate viewpoint]
to get:
![Image 20: A distant view
](../images/img-1.20-view-distant.png class='pixel')
And we can change field of view:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    cam.vfov = 20;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [change-field-view]: [main.cc] Change field of view]
to get:
![Image 21: Zooming in](../images/img-1.21-view-zoom.png class='pixel')
Defocus Blur
====================================================================================================
Now our final feature: _defocus blur_.
Note, photographers call this _depth of field_, so be sure to only use the term _defocus blur_ among
your raytracing friends.
The reason we have defocus blur in real cameras is that they need a big hole (rather than just a
pinhole) through which to gather light.
A large hole would defocus everything, but if we stick a lens in front of the film/sensor, there
will be a certain distance at which everything is in focus.
Objects placed at that distance will appear in focus, and objects will appear increasingly blurry
the farther they are from that distance.
You can think of a lens this way: all light rays coming _from_ a specific point at the focus
distance -- and that hit the lens -- will be bent back _to_ a single point on the image sensor.
We call the distance between the camera center and the plane where everything is in perfect focus
the _focus distance_.
Be aware that the focus distance is not usually the same as the focal length -- the _focal length_
is the distance between the camera center and the image plane.
For our model, however, these two will have the same value, as we will put our pixel grid right on
the focus plane, which is _focus distance_ away from the camera center.
In a physical camera, the focus distance is controlled by the distance between the lens and the
film/sensor.
That is why you see the lens move relative to the camera when you change what is in focus (that may
happen in your phone camera too, but the sensor moves).
The “aperture” is a hole that controls how big the lens effectively is.
For a real camera, if you need more light you make the aperture bigger, and will get more blur for
objects away from the focus distance.
For our virtual camera, we can have a perfect sensor and never need more light, so we only use an
aperture when we want defocus blur.
A Thin Lens Approximation
--------------------------
A real camera has a complicated compound lens. For our code, we could simulate the order: sensor,
then lens, then aperture. Then we could figure out where to send the rays, and flip the image after
it's computed (the image is projected upside down on the film). Graphics people, however, usually
use a thin lens approximation:
![Figure [cam-lens]: Camera lens model](../images/fig-1.21-cam-lens.jpg)
We don’t need to simulate any of the inside of the camera -- for the purposes of rendering an image
outside the camera, that would be unnecessary complexity.
Instead, I usually start rays from an infinitely thin circular "lens", and send them toward the
pixel of interest on the focus plane (`focus_dist` away from the lens), where everything on that
plane in the 3D world is in perfect focus.
In practice, we accomplish this by placing the viewport in this plane.
Putting everything together:
1. The focus plane is orthogonal to the camera view direction.
2. The focus distance is the distance between the camera center and the focus plane.
3. The viewport lies on the focus plane, centered on the camera view direction vector.
4. The grid of pixel locations lies inside the viewport (located in the 3D world).
5. Random image sample locations are chosen from the region around the current pixel location.
6. The camera fires rays from random points on the lens through the current image sample location.
![Figure [cam-film-plane]: Camera focus plane](../images/fig-1.22-cam-film-plane.jpg)
Generating Sample Rays
-----------------------
Without defocus blur, all scene rays originate from the camera center (or `lookfrom`).
In order to accomplish defocus blur, we construct a disk centered at the camera center.
The larger the radius, the greater the defocus blur.
You can think of our original camera as having a defocus disk of radius zero (no blur at all), so
all rays originated at the disk center (`lookfrom`).
So, how large should the defocus disk be?
Since the size of this disk controls how much defocus blur we get, that should be a parameter of the
camera class.
We could just take the radius of the disk as a camera parameter, but the blur would vary depending
on the projection distance.
A slightly easier parameter is to specify the angle of the cone with apex at viewport center and
base (defocus disk) at the camera center.
This should give you more consistent results as you vary the focus distance for a given shot.
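As a worked example, using the parameter values we'll plug in shortly: a defocus angle of $10°$
with a focus distance of $3.4$ gives a disk radius of

$$ r = 3.4 \cdot \tan\left(\frac{10°}{2}\right) \approx 0.30 $$

so every ray will originate within $0.3$ units of the camera center.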
Since we'll be choosing random points from the defocus disk, we'll need a function to do that:
`random_in_unit_disk()`.
This function works using the same kind of method we use in `random_in_unit_sphere()`, just for two
dimensions.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
inline vec3 unit_vector(vec3 v) {
    return v / v.length();
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ Highlight
inline vec3 random_in_unit_disk() {
    while (true) {
        auto p = vec3(random_double(-1,1), random_double(-1,1), 0);
        if (p.length_squared() < 1)
            return p;
    }
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [rand-in-unit-disk]: [vec3.h] Generate random point inside unit disk]
Now let's update the camera to originate rays from the defocus disk:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class camera {
  public:
    double aspect_ratio      = 1.0;  // Ratio of image width over height
    int    image_width       = 100;  // Rendered image width in pixel count
    int    samples_per_pixel = 10;   // Count of random samples for each pixel
    int    max_depth         = 10;   // Maximum number of ray bounces into scene

    double vfov     = 90;              // Vertical view angle (field of view)
    point3 lookfrom = point3(0,0,-1);  // Point camera is looking from
    point3 lookat   = point3(0,0,0);   // Point camera is looking at
    vec3   vup      = vec3(0,1,0);     // Camera-relative "up" direction
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    double defocus_angle = 0;  // Variation angle of rays through each pixel
    double focus_dist = 10;    // Distance from camera lookfrom point to plane of perfect focus
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    ...

  private:
    int    image_height;      // Rendered image height
    point3 center;            // Camera center
    point3 pixel00_loc;       // Location of pixel 0, 0
    vec3   pixel_delta_u;     // Offset to pixel to the right
    vec3   pixel_delta_v;     // Offset to pixel below
    vec3   u, v, w;           // Camera frame basis vectors
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    vec3   defocus_disk_u;    // Defocus disk horizontal radius
    vec3   defocus_disk_v;    // Defocus disk vertical radius
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    void initialize() {
        image_height = static_cast<int>(image_width / aspect_ratio);
        image_height = (image_height < 1) ? 1 : image_height;

        center = lookfrom;

        // Determine viewport dimensions.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ delete
        auto focal_length = (lookfrom - lookat).length();
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
        auto theta = degrees_to_radians(vfov);
        auto h = tan(theta/2);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
        auto viewport_height = 2 * h * focus_dist;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
        auto viewport_width = viewport_height * (static_cast<double>(image_width)/image_height);

        // Calculate the u,v,w unit basis vectors for the camera coordinate frame.
        w = unit_vector(lookfrom - lookat);
        u = unit_vector(cross(vup, w));
        v = cross(w, u);

        // Calculate the vectors across the horizontal and down the vertical viewport edges.
        vec3 viewport_u = viewport_width * u;    // Vector across viewport horizontal edge
        vec3 viewport_v = viewport_height * -v;  // Vector down viewport vertical edge

        // Calculate the horizontal and vertical delta vectors to the next pixel.
        pixel_delta_u = viewport_u / image_width;
        pixel_delta_v = viewport_v / image_height;

        // Calculate the location of the upper left pixel.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
        auto viewport_upper_left = center - (focus_dist * w) - viewport_u/2 - viewport_v/2;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
        pixel00_loc = viewport_upper_left + 0.5 * (pixel_delta_u + pixel_delta_v);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
        // Calculate the camera defocus disk basis vectors.
        auto defocus_radius = focus_dist * tan(degrees_to_radians(defocus_angle / 2));
        defocus_disk_u = u * defocus_radius;
        defocus_disk_v = v * defocus_radius;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    }

    ray get_ray(int i, int j) const {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
        // Get a randomly-sampled camera ray for the pixel at location i,j, originating from
        // the camera defocus disk.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
        auto pixel_center = pixel00_loc + (i * pixel_delta_u) + (j * pixel_delta_v);
        auto pixel_sample = pixel_center + pixel_sample_square();
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
        auto ray_origin = (defocus_angle <= 0) ? center : defocus_disk_sample();
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
        auto ray_direction = pixel_sample - ray_origin;

        return ray(ray_origin, ray_direction);
    }

    ...
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    point3 defocus_disk_sample() const {
        // Returns a random point in the camera defocus disk.
        auto p = random_in_unit_disk();
        return center + (p[0] * defocus_disk_u) + (p[1] * defocus_disk_v);
    }
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    color ray_color(const ray& r, int depth, const hittable& world) const {
    ...
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [camera-dof]: [camera.h] Camera with adjustable depth-of-field (dof)]
Using a large aperture:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
int main() {
    ...

    camera cam;

    cam.aspect_ratio      = 16.0 / 9.0;
    cam.image_width       = 400;
    cam.samples_per_pixel = 100;
    cam.max_depth         = 50;

    cam.vfov     = 20;
    cam.lookfrom = point3(-2,2,1);
    cam.lookat   = point3(0,0,-1);
    cam.vup      = vec3(0,1,0);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    cam.defocus_angle = 10.0;
    cam.focus_dist    = 3.4;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    cam.render(world);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [scene-camera-dof]: [main.cc] Scene camera with depth-of-field]
We get:
![Image 22: Spheres with depth-of-field
](../images/img-1.22-depth-of-field.png class='pixel')
Where Next?
====================================================================================================
A Final Render
---------------
Let’s make the image on the cover of this book -- lots of random spheres.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
int main() {
    hittable_list world;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ Highlight
    auto ground_material = make_shared<lambertian>(color(0.5, 0.5, 0.5));
    world.add(make_shared<sphere>(point3(0,-1000,0), 1000, ground_material));

    for (int a = -11; a < 11; a++) {
        for (int b = -11; b < 11; b++) {
            auto choose_mat = random_double();
            point3 center(a + 0.9*random_double(), 0.2, b + 0.9*random_double());

            if ((center - point3(4, 0.2, 0)).length() > 0.9) {
                shared_ptr<material> sphere_material;

                if (choose_mat < 0.8) {
                    // diffuse
                    auto albedo = color::random() * color::random();
                    sphere_material = make_shared<lambertian>(albedo);
                    world.add(make_shared<sphere>(center, 0.2, sphere_material));
                } else if (choose_mat < 0.95) {
                    // metal
                    auto albedo = color::random(0.5, 1);
                    auto fuzz = random_double(0, 0.5);
                    sphere_material = make_shared<metal>(albedo, fuzz);
                    world.add(make_shared<sphere>(center, 0.2, sphere_material));
                } else {
                    // glass
                    sphere_material = make_shared<dielectric>(1.5);
                    world.add(make_shared<sphere>(center, 0.2, sphere_material));
                }
            }
        }
    }

    auto material1 = make_shared<dielectric>(1.5);
    world.add(make_shared<sphere>(point3(0, 1, 0), 1.0, material1));

    auto material2 = make_shared<lambertian>(color(0.4, 0.2, 0.1));
    world.add(make_shared<sphere>(point3(-4, 1, 0), 1.0, material2));

    auto material3 = make_shared<metal>(color(0.7, 0.6, 0.5), 0.0);
    world.add(make_shared<sphere>(point3(4, 1, 0), 1.0, material3));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    camera cam;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ Highlight
    cam.aspect_ratio      = 16.0 / 9.0;
    cam.image_width       = 1200;
    cam.samples_per_pixel = 500;
    cam.max_depth         = 50;

    cam.vfov     = 20;
    cam.lookfrom = point3(13,2,3);
    cam.lookat   = point3(0,0,0);
    cam.vup      = vec3(0,1,0);

    cam.defocus_angle = 0.6;
    cam.focus_dist    = 10.0;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    cam.render(world);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [scene-final]: [main.cc] Final scene]
(Note that the code above differs slightly from the project sample code: the `samples_per_pixel` is
set to 500 above for a high-quality image that will take quite a while to render.
The sample code uses a value of 10 in the interest of reasonable run times while developing and
validating.)
This gives:
![Image 23: Final scene](../images/img-1.23-book1-final.jpg)
An interesting thing you might note is that the glass balls don't really have shadows, which makes
them look like they are floating. This is not a bug -- you don't see glass balls much in real life,
where they also look a bit strange, and indeed seem to float on cloudy days. A point on the big
sphere under a glass ball still has lots of light hitting it, because the sky is re-ordered rather
than blocked.
Next Steps
-----------
You now have a cool ray tracer! What next?
1. Lights -- You can do this explicitly, by sending shadow rays to lights, or it can be done
implicitly by making some objects emit light, biasing scattered rays toward them, and then
downweighting those rays to cancel out the bias. Both work. I am in the minority in favoring
the latter approach.
2. Triangles -- Most cool models are in triangle form. The model I/O is the worst and almost
everybody tries to get somebody else’s code to do this.
3. Surface Textures -- This lets you paste images on like wallpaper. Pretty easy and a good thing
to do.
4. Solid textures -- Ken Perlin has his code online. Andrew Kensler has some very cool info at his
blog.
5. Volumes and Media -- Cool stuff and will challenge your software architecture. I favor making
volumes have the hittable interface and probabilistically have intersections based on density.
Your rendering code doesn’t even have to know it has volumes with that method.
6. Parallelism -- Run $N$ copies of your code on $N$ cores with different random seeds. Average
the $N$ runs. This averaging can also be done hierarchically, where pairs of runs can be averaged
to get $N/2$ images, pairs of those to get $N/4$, and so on. That method of parallelism should
extend well into the thousands of cores with very little coding. (A minimal sketch of the
per-thread averaging appears after this list.)
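For the simplest (non-hierarchical) version of that last idea, here's a sketch of the pattern
under some assumptions of my own: `render_pass()` is a hypothetical stand-in for a full render of
your scene into a flat RGB buffer, seeded per thread -- adapt it to however your renderer produces
pixels.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include <random>
#include <thread>
#include <vector>

const int image_width  = 400;
const int image_height = 225;

// Stand-in for one full render of the scene; replace the body with your own
// camera/world rendering, seeding your random number generator from `seed`.
std::vector<double> render_pass(unsigned seed) {
    std::mt19937 rng(seed);
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    std::vector<double> buffer(image_width * image_height * 3);
    for (auto& channel : buffer)
        channel = dist(rng);  // pretend this is a rendered pixel sample
    return buffer;
}

int main() {
    const int num_threads = 8;
    std::vector<std::vector<double>> buffers(num_threads);
    std::vector<std::thread> threads;

    // Run one full render per thread, each with a different seed.
    for (int t = 0; t < num_threads; t++)
        threads.emplace_back([t, &buffers] { buffers[t] = render_pass(1000 + t); });
    for (auto& th : threads)
        th.join();

    // Average the independent runs into the final image.
    std::vector<double> final_image(image_width * image_height * 3, 0.0);
    for (const auto& buf : buffers)
        for (std::size_t i = 0; i < final_image.size(); i++)
            final_image[i] += buf[i] / num_threads;

    // final_image now holds the averaged pixels, ready to be written out as PPM.
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~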
Have fun, and please send me your cool images!
(insert acknowledgments.md.html here)
Citing This Book
====================================================================================================
Consistent citations make it easier to identify the source, location and versions of this work. If
you are citing this book, we ask that you try to use one of the following forms if possible.
Basic Data
-----------
- **Title (series)**: “Ray Tracing in One Weekend Series”
- **Title (book)**: “Ray Tracing in One Weekend”
- **Author**: Peter Shirley, Trevor David Black, Steve Hollasch
- **Version/Edition**: v4.0.0-alpha.1
- **Date**: 2023-08-06
- **URL (series)**: https://raytracing.github.io/
- **URL (book)**: https://raytracing.github.io/books/RayTracingInOneWeekend.html
Snippets
---------
### Markdown
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[_Ray Tracing in One Weekend_](https://raytracing.github.io/books/RayTracingInOneWeekend.html)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
### HTML
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
<a href="https://raytracing.github.io/books/RayTracingInOneWeekend.html"><cite>Ray Tracing in One Weekend</cite></a>
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
### LaTeX and BibTex
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~\cite{Shirley2023RTW1}
@misc{Shirley2023RTW1,
    title = {Ray Tracing in One Weekend},
    author = {Peter Shirley and Trevor David Black and Steve Hollasch},
    year = {2023},
    month = {August},
    note = {\small \texttt{https://raytracing.github.io/books/RayTracingInOneWeekend.html}},
    url = {https://raytracing.github.io/books/RayTracingInOneWeekend.html}
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
### BibLaTeX
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\usepackage{biblatex}
~\cite{Shirley2023RTW1}
@online{Shirley2023RTW1,
    title = {Ray Tracing in One Weekend},
    author = {Peter Shirley and Trevor David Black and Steve Hollasch},
    year = {2023},
    month = {August},
    url = {https://raytracing.github.io/books/RayTracingInOneWeekend.html}
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
### IEEE
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
“Ray Tracing in One Weekend.” raytracing.github.io/books/RayTracingInOneWeekend.html
(accessed MMM. DD, YYYY)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
### MLA
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Ray Tracing in One Weekend. raytracing.github.io/books/RayTracingInOneWeekend.html
Accessed DD MMM. YYYY.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Peter Shirley]: https://github.com/petershirley
[Steve Hollasch]: https://github.com/hollasch
[Trevor David Black]: https://github.com/trevordblack
[discussions]: https://github.com/RayTracing/raytracing.github.io/discussions/
[gfx-codex]: https://graphicscodex.com/
[repo]: https://github.com/RayTracing/raytracing.github.io/
[square-pixels]: https://www.researchgate.net/publication/244986797