From ddaf7ae051bb312189cc023c75788385229235e8 Mon Sep 17 00:00:00 2001
From: Steve Hollasch
Date: Fri, 15 May 2020 15:38:33 -0700
Subject: [PATCH 1/5] Update camera code

In the prior code, changes to camera variables would require associated
changes to a number of other related variables, in ways that were not
apparent. This change uses variables in a more consistent manner, where
changing any particular variable should yield consistent results in the
other values. In addition, this provides a more consistent series of
iterative steps as the camera develops through the text.

Resolves #527
---
 books/RayTracingInOneWeekend.html | 145 ++++++++++++++++++------
 books/RayTracingTheNextWeek.html  |  38 ++++----
 src/common/camera.h               |  34 +++----
 3 files changed, 126 insertions(+), 91 deletions(-)

diff --git a/books/RayTracingInOneWeekend.html b/books/RayTracingInOneWeekend.html
index ebd77a1f..d1aa2ab0 100644
--- a/books/RayTracingInOneWeekend.html
+++ b/books/RayTracingInOneWeekend.html
@@ -466,12 +466,21 @@
 that returns the color of the background (a simple gradient).
 
 I’ve often gotten into trouble using square images for debugging because I transpose $x$ and $y$ too
-often, so I’ll stick with a 200×100 image. I’ll put the “eye” (or camera center if you think of a
-camera) at $(0,0,0)$. I will have the y-axis go up, and the x-axis to the right. In order to respect
-the convention of a right handed coordinate system, into the screen is the negative z-axis. I will
-traverse the screen from the lower left hand corner, and use two offset vectors along the screen
-sides to move the ray endpoint across the screen. Note that I do not make the ray direction a unit
-length vector because I think not doing that makes for simpler and slightly faster code.
+often, so I’ll use a non-square image. For now we'll use a 16:9 aspect ratio, since that's so
+common.
+
+In addition to setting up the pixel dimensions for the rendered image, we also need to set up a
+virtual viewport through which to pass our scene rays. For the standard square pixel spacing, the
+viewport's aspect ratio should be the same as our rendered image. We'll just pick a viewport two
+units in height. Ultimately, changing the scale of the viewport (while holding the focal distance
+constant) is equivalent to changing the viewing angle, or “zoom” of the image.
+
+I’ll put the “eye” (or camera center if you think of a camera) at $(0,0,0)$. I will have the y-axis
+go up, and the x-axis to the right. In order to respect the convention of a right handed coordinate
+system, into the screen is the negative z-axis. I will traverse the screen from the lower left hand
+corner, and use two offset vectors along the screen sides to move the ray endpoint across the
+screen. Note that I do not make the ray direction a unit length vector because I think not doing
+that makes for simpler and slightly faster code.
![Figure [cam-geom]: Camera geometry](../images/fig.cam-geom.jpg) @@ -497,19 +506,25 @@ std::cout << "P3\n" << image_width << " " << image_height << "\n255\n"; + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight - point3 origin(0.0, 0.0, 0.0); - vec3 horizontal(4.0, 0.0, 0.0); - vec3 vertical(0.0, 2.25, 0.0); - point3 lower_left_corner = origin - horizontal/2 - vertical/2 - vec3(0,0,1); + auto viewport_height = 2.0; + auto viewport_width = aspect_ratio * viewport_height; + auto focal_length = 1.0; + + auto origin = point3(0, 0, 0); + auto horizontal = vec3(viewport_width, 0, 0); + auto vertical = vec3(0, viewport_height, 0); + auto lower_left_corner = origin - horizontal/2 - vertical/2 - vec3(0, 0, focal_length); ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ + for (int j = image_height-1; j >= 0; --j) { std::cerr << "\rScanlines remaining: " << j << ' ' << std::flush; for (int i = 0; i < image_width; ++i) { ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight auto u = double(i) / (image_width-1); auto v = double(j) / (image_height-1); - ray r(origin, lower_left_corner + u*horizontal + v*vertical); + ray r(origin, lower_left_corner + u*horizontal + v*vertical - origin); color pixel_color = ray_color(r); ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ write_color(std::cout, pixel_color); @@ -1201,10 +1216,14 @@ std::cout << "P3\n" << image_width << ' ' << image_height << "\n255\n"; - point3 lower_left_corner(-2.0, -1.0, -1.0); - vec3 horizontal(4.0, 0.0, 0.0); - vec3 vertical(0.0, 2.0, 0.0); - point3 origin(0.0, 0.0, 0.0); + auto viewport_height = 2.0; + auto viewport_width = aspect_ratio * viewport_height; + auto focal_length = 1.0; + + auto origin = point3(0, 0, 0); + auto horizontal = vec3(viewport_width, 0, 0); + auto vertical = vec3(0, viewport_height, 0); + auto lower_left_corner = origin - horizontal/2 - vertical/2 - vec3(0, 0, focal_length); ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight @@ -1313,7 +1332,8 @@
-Putting that all together yields a camera class encapsulating our simple axis-aligned camera from
+Now's a good time to create a `camera` class to manage our virtual camera and the related tasks of
+scene sampling. The following class implements a simple camera using the axis-aligned camera from
 before:
 
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
@@ -1325,10 +1345,15 @@
 class camera {
     public:
         camera() {
-            lower_left_corner = point3(-2.0, -1.0, -1.0);
-            horizontal = vec3(4.0, 0.0, 0.0);
-            vertical = vec3(0.0, 2.0, 0.0);
-            origin = point3(0.0, 0.0, 0.0);
+            auto aspect_ratio = 16.0 / 9.0;
+            auto viewport_height = 2.0;
+            auto viewport_width = aspect_ratio * viewport_height;
+            auto focal_length = 1.0;
+
+            origin = point3(0, 0, 0);
+            horizontal = vec3(viewport_width, 0.0, 0.0);
+            vertical = vec3(0.0, viewport_height, 0.0);
+            lower_left_corner = origin - horizontal/2 - vertical/2 - vec3(0, 0, focal_length);
         }
 
         ray get_ray(double u, double v) const {
@@ -1399,9 +1424,11 @@
 
     world.add(make_shared<sphere>(point3(0,0,-1), 0.5));
    world.add(make_shared<sphere>(point3(0,-100.5,-1), 100));
 
+    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
     camera cam;
     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
+
     for (int j = image_height-1; j >= 0; --j) {
         std::cerr << "\rScanlines remaining: " << j << ' ' << std::flush;
         for (int i = 0; i < image_width; ++i) {
@@ -1882,7 +1909,6 @@
         double t;
         bool front_face;
 
-
         inline void set_face_normal(const ray& r, const vec3& outward_normal) {
             front_face = dot(r.direction(), outward_normal) < 0;
             normal = front_face ? outward_normal :-outward_normal;
@@ -2107,6 +2133,7 @@
 
     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
     camera cam;
+
     for (int j = image_height-1; j >= 0; --j) {
         std::cerr << "\rScanlines remaining: " << j << ' ' << std::flush;
         for (int i = 0; i < image_width; ++i) {
@@ -2551,16 +2578,17 @@
             double vfov, // vertical field-of-view in degrees
             double aspect_ratio
         ) {
-            origin = point3(0.0, 0.0, 0.0);
-
             auto theta = degrees_to_radians(vfov);
-            auto half_height = tan(theta/2);
-            auto half_width = aspect_ratio * half_height;
+            auto h = tan(theta/2);
+            auto viewport_height = 2.0 * h;
+            auto viewport_width = aspect_ratio * viewport_height;
 
-            lower_left_corner = point3(-half_width, -half_height, -1.0);
+            auto focal_length = 1.0;
 
-            horizontal = vec3(2*half_width, 0.0, 0.0);
-            vertical = vec3(0.0, 2*half_height, 0.0);
+            origin = point3(0, 0, 0);
+            horizontal = vec3(viewport_width, 0.0, 0.0);
+            vertical = vec3(0.0, viewport_height, 0.0);
+            lower_left_corner = origin - horizontal/2 - vertical/2 - vec3(0, 0, focal_length);
         }
 
     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
@@ -2626,29 +2654,28 @@
     public:
         camera(
     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
-            point3 lookfrom, point3 lookat, vec3 vup,
+            point3 lookfrom,
+            point3 lookat,
+            vec3 vup,
     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
             double vfov, // vertical field-of-view in degrees
             double aspect_ratio
         ) {
-    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
-            origin = lookfrom;
-    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
-            vec3 u, v, w;
-
             auto theta = degrees_to_radians(vfov);
-            auto half_height = 
tan(theta/2); - auto half_width = aspect_ratio * half_height; + auto h = tan(theta/2); + auto viewport_height = 2.0 * h; + auto viewport_width = aspect_ratio * viewport_height; - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight - w = unit_vector(lookfrom - lookat); - u = unit_vector(cross(vup, w)); - v = cross(w, u); - lower_left_corner = origin - half_width*u - half_height*v - w; + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight + auto w = unit_vector(lookfrom - lookat); + auto u = unit_vector(cross(vup, w)); + auto v = cross(w, u); - horizontal = 2*half_width*u; - vertical = 2*half_height*v; + origin = lookfrom; + horizontal = viewport_width * u; + vertical = viewport_height * v; + lower_left_corner = origin - horizontal/2 - vertical/2 - w; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ } @@ -2759,31 +2786,35 @@ class camera { public: camera( - point3 lookfrom, point3 lookat, vec3 vup, + point3 lookfrom, + point3 lookat, + vec3 vup, double vfov, // vertical field-of-view in degrees + double aspect_ratio, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight - double aspect_ratio, double aperture, double focus_dist - ) { - origin = lookfrom; - lens_radius = aperture / 2; + double aperture, + double focus_dist ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ - + ) { auto theta = degrees_to_radians(vfov); - auto half_height = tan(theta/2); - auto half_width = aspect_ratio * half_height; + auto h = tan(theta/2); + auto viewport_height = 2.0 * h; + auto viewport_width = aspect_ratio * viewport_height; + + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight w = unit_vector(lookfrom - lookat); u = unit_vector(cross(vup, w)); v = cross(w, u); + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ + origin = lookfrom; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight - lower_left_corner = origin - - half_width * focus_dist * u - - half_height * focus_dist * v - - focus_dist * w; + horizontal = focus_dist * viewport_width * u; + vertical = focus_dist * viewport_height * v; + lower_left_corner = origin - horizontal/2 - vertical/2 - focus_dist*w; - horizontal = 2*half_width*focus_dist*u; - vertical = 2*half_height*focus_dist*v; + lens_radius = aperture / 2; ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ } diff --git a/books/RayTracingTheNextWeek.html b/books/RayTracingTheNextWeek.html index 27b9a74e..5c3b5608 100644 --- a/books/RayTracingTheNextWeek.html +++ b/books/RayTracingTheNextWeek.html @@ -108,35 +108,37 @@ class camera { public: camera( - point3 lookfrom, point3 lookat, vec3 vup, + point3 lookfrom, + point3 lookat, + vec3 vup, double vfov, // vertical field-of-view in degrees - double aspect_ratio, double aperture, double focus_dist, + double aspect_ratio, + double aperture, + double focus_dist, ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight - double t0 = 0, double t1 = 0 + double t0 = 0, + double t1 = 0 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ ) { - origin = lookfrom; - lens_radius = aperture / 2; - 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight - time0 = t0; - time1 = t1; - - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ auto theta = degrees_to_radians(vfov); - auto half_height = tan(theta/2); - auto half_width = aspect * half_height; + auto h = tan(theta/2); + auto viewport_height = 2.0 * h; + auto viewport_width = aspect_ratio * viewport_height; w = unit_vector(lookfrom - lookat); u = unit_vector(cross(vup, w)); v = cross(w, u); - lower_left_corner = origin - - half_width*focus_dist*u - - half_height*focus_dist*v - - focus_dist*w; + origin = lookfrom; + horizontal = focus_dist * viewport_width * u; + vertical = focus_dist * viewport_height * v; + lower_left_corner = origin - horizontal/2 - vertical/2 - focus_dist*w; - horizontal = 2*half_width*focus_dist*u; - vertical = 2*half_height*focus_dist*v; + lens_radius = aperture / 2; + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight + time0 = t0; + time1 = t1; + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ } ray get_ray(double s, double t) const { diff --git a/src/common/camera.h b/src/common/camera.h index 87b7c9ed..0e5293b2 100644 --- a/src/common/camera.h +++ b/src/common/camera.h @@ -19,31 +19,33 @@ class camera { camera() : camera(point3(0,0,-1), point3(0,0,0), vec3(0,1,0), 40, 1, 0, 10) {} camera( - point3 lookfrom, point3 lookat, vec3 vup, + point3 lookfrom, + point3 lookat, + vec3 vup, double vfov, // vertical field-of-view in degrees - double aspect_ratio, double aperture, double focus_dist, - double t0 = 0, double t1 = 0 + double aspect_ratio, + double aperture, + double focus_dist, + double t0 = 0, + double t1 = 0 ) { - origin = lookfrom; - lens_radius = aperture / 2; - time0 = t0; - time1 = t1; - auto theta = degrees_to_radians(vfov); - auto half_height = tan(theta/2); - auto half_width = aspect_ratio * half_height; + auto h = tan(theta/2); + auto viewport_height = 2.0 * h; + auto viewport_width = aspect_ratio * viewport_height; w = unit_vector(lookfrom - lookat); u = unit_vector(cross(vup, w)); v = cross(w, u); - lower_left_corner = origin - - half_width*focus_dist*u - - half_height*focus_dist*v - - focus_dist*w; + origin = lookfrom; + horizontal = focus_dist * viewport_width * u; + vertical = focus_dist * viewport_height * v; + lower_left_corner = origin - horizontal/2 - vertical/2 - focus_dist*w; - horizontal = 2*half_width*focus_dist*u; - vertical = 2*half_height*focus_dist*v; + lens_radius = aperture / 2; + time0 = t0; + time1 = t1; } ray get_ray(double s, double t) const { From 231f4e119ecca0e98c065f29cc0e652e7570e3d3 Mon Sep 17 00:00:00 2001 From: Steve Hollasch Date: Fri, 15 May 2020 15:55:49 -0700 Subject: [PATCH 2/5] Update changelog for camera improvements --- CHANGELOG.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index ad1e8e63..a69e63ec 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -3,6 +3,10 @@ Change Log -- Ray Tracing in One Weekend # v3.1.1 (in progress) +### Common + - Change: Camera code improvements to make it more robust when any particular value changes. Also, + the code develops in a smoother series of iterations as the book progresses. (#536) + ### _In One Weekend_ - Change: The C++ `` version of `random_double()` no longer depends on `` header. 
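
The following is not part of the patch series; it is a minimal standalone sketch showing how the
viewport quantities introduced in patch 1 fit together once its hunks are applied. It assumes the
book's `vec3.h` and `ray.h` headers (providing `vec3`, `point3`, `color`, `ray`, and
`unit_vector`); the 400-pixel image width and the inlined background-gradient `ray_color()` are
illustrative stand-ins rather than values taken from the patch.

    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    #include "ray.h"
    #include "vec3.h"

    #include <iostream>

    // The book's sky gradient, reproduced here so the sketch runs on its own.
    color ray_color(const ray& r) {
        vec3 unit_direction = unit_vector(r.direction());
        auto t = 0.5*(unit_direction.y() + 1.0);
        return (1.0-t)*color(1.0, 1.0, 1.0) + t*color(0.5, 0.7, 1.0);
    }

    int main() {
        // Image: the height derives from the width and the aspect ratio, so changing any one
        // of these values keeps the others consistent -- the point of this rework.
        const auto aspect_ratio = 16.0 / 9.0;
        const int image_width = 400;    // arbitrary width, chosen only for this sketch
        const int image_height = static_cast<int>(image_width / aspect_ratio);

        // Camera: the viewport shares the image's aspect ratio; only the viewport height
        // and the focal length are chosen directly.
        auto viewport_height = 2.0;
        auto viewport_width = aspect_ratio * viewport_height;
        auto focal_length = 1.0;

        auto origin = point3(0, 0, 0);
        auto horizontal = vec3(viewport_width, 0, 0);
        auto vertical = vec3(0, viewport_height, 0);
        auto lower_left_corner = origin - horizontal/2 - vertical/2 - vec3(0, 0, focal_length);

        std::cout << "P3\n" << image_width << ' ' << image_height << "\n255\n";

        for (int j = image_height-1; j >= 0; --j) {
            for (int i = 0; i < image_width; ++i) {
                auto u = double(i) / (image_width-1);
                auto v = double(j) / (image_height-1);
                ray r(origin, lower_left_corner + u*horizontal + v*vertical - origin);
                color c = ray_color(r);

                // Write the pixel directly; the book's write_color() does the same at this stage.
                std::cout << static_cast<int>(255.999 * c.x()) << ' '
                          << static_cast<int>(255.999 * c.y()) << ' '
                          << static_cast<int>(255.999 * c.z()) << '\n';
            }
        }
    }
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

With this arrangement, switching to a different aspect ratio or image width is a one-line change,
which is exactly the consistency the commit message of patch 1 describes.
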
From 1764d5e9832dc08945a9523491244eaf3c907cdf Mon Sep 17 00:00:00 2001 From: Steve Hollasch Date: Sat, 16 May 2020 13:55:53 -0700 Subject: [PATCH 3/5] Clarify text around focal length vs focus distance --- books/RayTracingInOneWeekend.html | 24 ++++++++++++++---------- 1 file changed, 14 insertions(+), 10 deletions(-) diff --git a/books/RayTracingInOneWeekend.html b/books/RayTracingInOneWeekend.html index d1aa2ab0..0a56d0a7 100644 --- a/books/RayTracingInOneWeekend.html +++ b/books/RayTracingInOneWeekend.html @@ -472,8 +472,9 @@ In addition to setting up the pixel dimensions for the rendered image, we also need to set up a virtual viewport through which to pass our scene rays. For the standard square pixel spacing, the viewport's aspect ratio should be the same as our rendered image. We'll just pick a viewport two -units in height. Ultimately, changing the scale of the viewport (while holding the focal distance -constant) is equivalent to changing the viewing angle, or “zoom” of the image. +units in height. We'll also set the distance between the projection plane and the projection point +to be one unit. This is referred to as the “focal length”, not to be confused with “focus distance”, +which we'll present later. I’ll put the “eye” (or camera center if you think of a camera) at $(0,0,0)$. I will have the y-axis go up, and the x-axis to the right. In order to respect the convention of a right handed coordinate @@ -2728,16 +2729,19 @@ The reason we defocus blur in real cameras is because they need a big hole (rather than just a pinhole) to gather light. This would defocus everything, but if we stick a lens in the hole, there will be a certain distance where everything is in focus. You can think of a lens this way: all light -rays coming _from_ a specific point at the focal distance -- and that hit the lens -- will be bent +rays coming _from_ a specific point at the focus distance -- and that hit the lens -- will be bent back _to_ a single point on the image sensor. -In a physical camera, the distance to that plane where things are in focus is controlled by the -distance between the lens and the film/sensor. That is why you see the lens move relative to the -camera when you change what is in focus (that may happen in your phone camera too, but the sensor -moves). The “aperture” is a hole to control how big the lens is effectively. For a real camera, if -you need more light you make the aperture bigger, and will get more defocus blur. For our virtual -camera, we can have a perfect sensor and never need more light, so we only have an aperture when we -want defocus blur. +We call the distance between the projection point and the plane where everything is in perfect focus +the _focus distance_. Be aware that the focus distance is not the same as the focal length -- the +_focal length_ is the distance between the projection point and the image plane. + +In a physical camera, the focus distance is controlled by the distance between the lens and the +film/sensor. That is why you see the lens move relative to the camera when you change what is in +focus (that may happen in your phone camera too, but the sensor moves). The “aperture” is a hole to +control how big the lens is effectively. For a real camera, if you need more light you make the +aperture bigger, and will get more defocus blur. For our virtual camera, we can have a perfect +sensor and never need more light, so we only have an aperture when we want defocus blur. 
A Thin Lens Approximation From bf4269cc498659469d394f1485d953b9b0e99d54 Mon Sep 17 00:00:00 2001 From: Steve Hollasch Date: Sat, 16 May 2020 14:27:16 -0700 Subject: [PATCH 4/5] book1: update text around generating focused rays --- books/RayTracingInOneWeekend.html | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/books/RayTracingInOneWeekend.html b/books/RayTracingInOneWeekend.html index 0a56d0a7..2bd10b84 100644 --- a/books/RayTracingInOneWeekend.html +++ b/books/RayTracingInOneWeekend.html @@ -2759,8 +2759,8 @@
We don’t need to simulate any of the inside of the camera. For the purposes of rendering an image outside the camera, that would be unnecessary complexity. Instead, I usually start rays from the -surface of the lens, and send them toward a virtual film plane, by finding the projection of the -film on the plane that is in focus (at the distance `focus_dist`). +lens aperture, and send them toward the projection plane (`focus_dist` away), where everything on +that plane is in perfect focus. ![Figure [cam-film-plane]: Camera focus plane](../images/fig.cam-film-plane.jpg) From f54ee9bebe498dc9c250cf4ecf71ac46764aa106 Mon Sep 17 00:00:00 2001 From: Steve Hollasch Date: Sat, 16 May 2020 15:35:08 -0700 Subject: [PATCH 5/5] Another rewording of the lens model --- books/RayTracingInOneWeekend.html | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/books/RayTracingInOneWeekend.html b/books/RayTracingInOneWeekend.html index 2bd10b84..31281a07 100644 --- a/books/RayTracingInOneWeekend.html +++ b/books/RayTracingInOneWeekend.html @@ -2759,7 +2759,7 @@
We don’t need to simulate any of the inside of the camera. For the purposes of rendering an image outside the camera, that would be unnecessary complexity. Instead, I usually start rays from the -lens aperture, and send them toward the projection plane (`focus_dist` away), where everything on +lens, and send them toward the focus plane (`focus_dist` away from the lens), where everything on that plane is in perfect focus. ![Figure [cam-film-plane]: Camera focus plane](../images/fig.cam-film-plane.jpg)
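
For reference, here is a consolidated sketch of what `src/common/camera.h` looks like once all five
patches are applied. Only the constructor and signature changes appear in the hunks above; the
default constructor comes from the surrounding context lines, while `get_ray()` and the member
declarations are filled in from the book's existing code and should be read as assumptions rather
than as part of the patches. `degrees_to_radians()`, `random_double()`, and `random_in_unit_disk()`
are the book's helpers, assumed to come from `rtweekend.h` and `vec3.h`.

    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    #ifndef CAMERA_H
    #define CAMERA_H

    #include "rtweekend.h"  // the book's common header: vec3, ray, degrees_to_radians(), ...

    class camera {
        public:
            camera() : camera(point3(0,0,-1), point3(0,0,0), vec3(0,1,0), 40, 1, 0, 10) {}

            camera(
                point3 lookfrom,
                point3 lookat,
                vec3 vup,
                double vfov, // vertical field-of-view in degrees
                double aspect_ratio,
                double aperture,
                double focus_dist,
                double t0 = 0,
                double t1 = 0
            ) {
                // The viewport size follows from the field of view and aspect ratio alone.
                auto theta = degrees_to_radians(vfov);
                auto h = tan(theta/2);
                auto viewport_height = 2.0 * h;
                auto viewport_width = aspect_ratio * viewport_height;

                // Orthonormal camera frame.
                w = unit_vector(lookfrom - lookat);
                u = unit_vector(cross(vup, w));
                v = cross(w, u);

                // Every placement value derives from the frame, the viewport, and the focus distance.
                origin = lookfrom;
                horizontal = focus_dist * viewport_width * u;
                vertical = focus_dist * viewport_height * v;
                lower_left_corner = origin - horizontal/2 - vertical/2 - focus_dist*w;

                lens_radius = aperture / 2;
                time0 = t0;
                time1 = t1;
            }

            ray get_ray(double s, double t) const {
                // Start the ray on the lens disk and aim it at the focus plane, as described
                // in the reworded text of patches 3-5.
                vec3 rd = lens_radius * random_in_unit_disk();
                vec3 offset = u * rd.x() + v * rd.y();

                return ray(
                    origin + offset,
                    lower_left_corner + s*horizontal + t*vertical - origin - offset,
                    random_double(time0, time1)
                );
            }

        private:
            point3 origin;
            point3 lower_left_corner;
            vec3 horizontal;
            vec3 vertical;
            vec3 u, v, w;
            double lens_radius;
            double time0, time1;  // shutter open/close times
    };

    #endif
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
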