4 changes: 4 additions & 0 deletions CHANGELOG.md
@@ -3,6 +3,10 @@ Change Log -- Ray Tracing in One Weekend

# v3.1.1 (in progress)

### Common
- Change: Camera code improvements to make it more robust when any particular value changes. Also,
the code develops in a smoother series of iterations as the book progresses. (#536)

### _In One Weekend_
- Change: The C++ `<random>` version of `random_double()` no longer depends on `<functional>`
header.
169 changes: 102 additions & 67 deletions books/RayTracingInOneWeekend.html
@@ -466,12 +466,22 @@
that returns the color of the background (a simple gradient).

I’ve often gotten into trouble using square images for debugging because I transpose $x$ and $y$ too
often, so I’ll stick with a 200×100 image. I’ll put the “eye” (or camera center if you think of a
camera) at $(0,0,0)$. I will have the y-axis go up, and the x-axis to the right. In order to respect
the convention of a right handed coordinate system, into the screen is the negative z-axis. I will
traverse the screen from the lower left hand corner, and use two offset vectors along the screen
sides to move the ray endpoint across the screen. Note that I do not make the ray direction a unit
length vector because I think not doing that makes for simpler and slightly faster code.
often, so I’ll use a non-square image. For now we'll use a 16:9 aspect ratio, since that's so
common.

In addition to setting up the pixel dimensions for the rendered image, we also need to set up a
virtual viewport through which to pass our scene rays. For the standard square pixel spacing, the
viewport's aspect ratio should be the same as our rendered image. We'll just pick a viewport two
units in height. We'll also set the distance between the projection plane and the projection point
to be one unit. This is referred to as the “focal length”, not to be confused with “focus distance”,
which we'll present later.

I’ll put the “eye” (or camera center if you think of a camera) at $(0,0,0)$. I will have the y-axis
go up, and the x-axis to the right. In order to respect the convention of a right handed coordinate
system, into the screen is the negative z-axis. I will traverse the screen from the lower left hand
corner, and use two offset vectors along the screen sides to move the ray endpoint across the
screen. Note that I do not make the ray direction a unit length vector because I think not doing
that makes for simpler and slightly faster code.

![Figure [cam-geom]: Camera geometry](../images/fig.cam-geom.jpg)

@@ -497,19 +507,25 @@

std::cout << "P3\n" << image_width << " " << image_height << "\n255\n";


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
point3 origin(0.0, 0.0, 0.0);
vec3 horizontal(4.0, 0.0, 0.0);
vec3 vertical(0.0, 2.25, 0.0);
point3 lower_left_corner = origin - horizontal/2 - vertical/2 - vec3(0,0,1);
auto viewport_height = 2.0;
auto viewport_width = aspect_ratio * viewport_height;
auto focal_length = 1.0;

auto origin = point3(0, 0, 0);
auto horizontal = vec3(viewport_width, 0, 0);
auto vertical = vec3(0, viewport_height, 0);
auto lower_left_corner = origin - horizontal/2 - vertical/2 - vec3(0, 0, focal_length);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++

for (int j = image_height-1; j >= 0; --j) {
std::cerr << "\rScanlines remaining: " << j << ' ' << std::flush;
for (int i = 0; i < image_width; ++i) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
auto u = double(i) / (image_width-1);
auto v = double(j) / (image_height-1);
ray r(origin, lower_left_corner + u*horizontal + v*vertical);
ray r(origin, lower_left_corner + u*horizontal + v*vertical - origin);
color pixel_color = ray_color(r);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
write_color(std::cout, pixel_color);
@@ -1201,10 +1217,14 @@

std::cout << "P3\n" << image_width << ' ' << image_height << "\n255\n";

point3 lower_left_corner(-2.0, -1.0, -1.0);
vec3 horizontal(4.0, 0.0, 0.0);
vec3 vertical(0.0, 2.0, 0.0);
point3 origin(0.0, 0.0, 0.0);
auto viewport_height = 2.0;
auto viewport_width = aspect_ratio * viewport_height;
auto focal_length = 1.0;

auto origin = point3(0, 0, 0);
auto horizontal = vec3(viewport_width, 0, 0);
auto vertical = vec3(0, viewport_height, 0);
auto lower_left_corner = origin - horizontal/2 - vertical/2 - vec3(0, 0, focal_length);


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
@@ -1313,7 +1333,8 @@
</div>

<div class='together'>
Putting that all together yields a camera class encapsulating our simple axis-aligned camera from
Now's a good time to create a `camera` class to manage our virtual camera and the related tasks of
scene sampling. The following class implements a simple camera using the axis-aligned camera from
before:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
@@ -1325,10 +1346,15 @@
class camera {
public:
camera() {
lower_left_corner = point3(-2.0, -1.0, -1.0);
horizontal = vec3(4.0, 0.0, 0.0);
vertical = vec3(0.0, 2.0, 0.0);
origin = point3(0.0, 0.0, 0.0);
auto aspect_ratio = 16.0 / 9.0;
auto viewport_height = 2.0;
auto viewport_width = aspect_ratio * viewport_height;
auto focal_length = 1.0;

origin = point3(0, 0, 0);
horizontal = vec3(viewport_width, 0.0, 0.0);
vertical = vec3(0.0, viewport_height, 0.0);
lower_left_corner = origin - horizontal/2 - vertical/2 - vec3(0, 0, focal_length);
}

ray get_ray(double u, double v) const {
@@ -1399,9 +1425,11 @@
world.add(make_shared<sphere>(point3(0,0,-1), 0.5));
world.add(make_shared<sphere>(point3(0,-100.5,-1), 100));


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
camera cam;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++

for (int j = image_height-1; j >= 0; --j) {
std::cerr << "\rScanlines remaining: " << j << ' ' << std::flush;
for (int i = 0; i < image_width; ++i) {
@@ -1882,7 +1910,6 @@
double t;
bool front_face;


inline void set_face_normal(const ray& r, const vec3& outward_normal) {
front_face = dot(r.direction(), outward_normal) < 0;
normal = front_face ? outward_normal :-outward_normal;
@@ -2107,6 +2134,7 @@
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++

camera cam;

for (int j = image_height-1; j >= 0; --j) {
std::cerr << "\rScanlines remaining: " << j << ' ' << std::flush;
for (int i = 0; i < image_width; ++i) {
@@ -2551,16 +2579,17 @@
double vfov, // vertical field-of-view in degrees
double aspect_ratio
) {
origin = point3(0.0, 0.0, 0.0);

auto theta = degrees_to_radians(vfov);
auto half_height = tan(theta/2);
auto half_width = aspect_ratio * half_height;
auto h = tan(theta/2);
auto viewport_height = 2.0 * h;
auto viewport_width = aspect_ratio * viewport_height;

lower_left_corner = point3(-half_width, -half_height, -1.0);
auto focal_length = 1.0;

horizontal = vec3(2*half_width, 0.0, 0.0);
vertical = vec3(0.0, 2*half_height, 0.0);
origin = point3(0, 0, 0);
horizontal = vec3(viewport_width, 0.0, 0.0);
vertical = vec3(0.0, viewport_height, 0.0);
lower_left_corner = origin - horizontal/2 - vertical/2 - vec3(0, 0, focal_length);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++

@@ -2626,29 +2655,28 @@
public:
camera(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
point3 lookfrom, point3 lookat, vec3 vup,
point3 lookfrom,
point3 lookat,
vec3 vup,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
double vfov, // vertical field-of-view in degrees
double aspect_ratio
) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
origin = lookfrom;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
vec3 u, v, w;

auto theta = degrees_to_radians(vfov);
auto half_height = tan(theta/2);
auto half_width = aspect_ratio * half_height;
auto h = tan(theta/2);
auto viewport_height = 2.0 * h;
auto viewport_width = aspect_ratio * viewport_height;

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
w = unit_vector(lookfrom - lookat);
u = unit_vector(cross(vup, w));
v = cross(w, u);

lower_left_corner = origin - half_width*u - half_height*v - w;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
auto w = unit_vector(lookfrom - lookat);
auto u = unit_vector(cross(vup, w));
auto v = cross(w, u);

horizontal = 2*half_width*u;
vertical = 2*half_height*v;
origin = lookfrom;
horizontal = viewport_width * u;
vertical = viewport_height * v;
lower_left_corner = origin - horizontal/2 - vertical/2 - w;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
}

@@ -2701,16 +2729,19 @@
We get defocus blur in real cameras because they need a big hole (rather than just a
pinhole) to gather light. This would defocus everything, but if we stick a lens in the hole, there
will be a certain distance where everything is in focus. You can think of a lens this way: all light
rays coming _from_ a specific point at the focal distance -- and that hit the lens -- will be bent
rays coming _from_ a specific point at the focus distance -- and that hit the lens -- will be bent
back _to_ a single point on the image sensor.

In a physical camera, the distance to that plane where things are in focus is controlled by the
distance between the lens and the film/sensor. That is why you see the lens move relative to the
camera when you change what is in focus (that may happen in your phone camera too, but the sensor
moves). The “aperture” is a hole to control how big the lens is effectively. For a real camera, if
you need more light you make the aperture bigger, and will get more defocus blur. For our virtual
camera, we can have a perfect sensor and never need more light, so we only have an aperture when we
want defocus blur.
We call the distance between the projection point and the plane where everything is in perfect focus
the _focus distance_. Be aware that the focus distance is not the same as the focal length -- the
_focal length_ is the distance between the projection point and the image plane.

In a physical camera, the focus distance is controlled by the distance between the lens and the
film/sensor. That is why you see the lens move relative to the camera when you change what is in
focus (that may happen in your phone camera too, but the sensor moves). The “aperture” is a hole to
control how big the lens is effectively. For a real camera, if you need more light you make the
aperture bigger, and will get more defocus blur. For our virtual camera, we can have a perfect
sensor and never need more light, so we only have an aperture when we want defocus blur.


A Thin Lens Approximation
@@ -2728,8 +2759,8 @@
<div class="together">
We don’t need to simulate any of the inside of the camera. For the purposes of rendering an image
outside the camera, that would be unnecessary complexity. Instead, I usually start rays from the
surface of the lens, and send them toward a virtual film plane, by finding the projection of the
film on the plane that is in focus (at the distance `focus_dist`).
lens, and send them toward the focus plane (`focus_dist` away from the lens), where everything on
that plane is in perfect focus.

![Figure [cam-film-plane]: Camera focus plane](../images/fig.cam-film-plane.jpg)

@@ -2759,31 +2790,35 @@
class camera {
public:
camera(
point3 lookfrom, point3 lookat, vec3 vup,
point3 lookfrom,
point3 lookat,
vec3 vup,
double vfov, // vertical field-of-view in degrees
double aspect_ratio,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
double aspect_ratio, double aperture, double focus_dist
) {
origin = lookfrom;
lens_radius = aperture / 2;
double aperture,
double focus_dist
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++

) {
auto theta = degrees_to_radians(vfov);
auto half_height = tan(theta/2);
auto half_width = aspect_ratio * half_height;
auto h = tan(theta/2);
auto viewport_height = 2.0 * h;
auto viewport_width = aspect_ratio * viewport_height;


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
w = unit_vector(lookfrom - lookat);
u = unit_vector(cross(vup, w));
v = cross(w, u);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++

origin = lookfrom;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
lower_left_corner = origin
- half_width * focus_dist * u
- half_height * focus_dist * v
- focus_dist * w;
horizontal = focus_dist * viewport_width * u;
vertical = focus_dist * viewport_height * v;
lower_left_corner = origin - horizontal/2 - vertical/2 - focus_dist*w;

horizontal = 2*half_width*focus_dist*u;
vertical = 2*half_height*focus_dist*v;
lens_radius = aperture / 2;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
}

38 changes: 20 additions & 18 deletions books/RayTracingTheNextWeek.html
@@ -108,35 +108,37 @@
class camera {
public:
camera(
point3 lookfrom, point3 lookat, vec3 vup,
point3 lookfrom,
point3 lookat,
vec3 vup,
double vfov, // vertical field-of-view in degrees
double aspect_ratio, double aperture, double focus_dist,
double aspect_ratio,
double aperture,
double focus_dist,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
double t0 = 0, double t1 = 0
double t0 = 0,
double t1 = 0
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
) {
origin = lookfrom;
lens_radius = aperture / 2;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
time0 = t0;
time1 = t1;

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
auto theta = degrees_to_radians(vfov);
auto half_height = tan(theta/2);
auto half_width = aspect * half_height;
auto h = tan(theta/2);
auto viewport_height = 2.0 * h;
auto viewport_width = aspect_ratio * viewport_height;

w = unit_vector(lookfrom - lookat);
u = unit_vector(cross(vup, w));
v = cross(w, u);

lower_left_corner = origin
- half_width*focus_dist*u
- half_height*focus_dist*v
- focus_dist*w;
origin = lookfrom;
horizontal = focus_dist * viewport_width * u;
vertical = focus_dist * viewport_height * v;
lower_left_corner = origin - horizontal/2 - vertical/2 - focus_dist*w;

horizontal = 2*half_width*focus_dist*u;
vertical = 2*half_height*focus_dist*v;
lens_radius = aperture / 2;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
time0 = t0;
time1 = t1;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
}

ray get_ray(double s, double t) const {