We will be implementing a pathtracer that simulates realistic camera lenses and supports contrast-based autofocus. The lens simulator will support different types of custom lenses with modifiable parameters such as element radius and aperture, and the contrast-based autofocus feature will automatically bring a selected patch of the image into focus.
The goal of this project is to implement path tracing through lenses and contrast-based autofocus. In doing so, we will develop an interface that lets us gain insight into how different lens types and parameters may improve rendered image quality, either across the board or for scenes with particular characteristics. We also intend to investigate the effectiveness of contrast-based autofocus on various scenes, both in terms of its quality improvement over no focusing and in terms of its performance cost. Pursuing these questions can deliver insight into the convenience and effectiveness of autofocus, as well as the effects of simulating a variety of accurate lenses. A particularly useful application of lens simulation is improving the realism of renders, which is valuable for modeling the real world.
In the first part of the project, we will implement the simulation of passing a ray through various lenses. We will also implement various methods in the Lens and LensCamera classes; these methods allow us to set the parameters of the camera, compute the focus depth, and sample a point on the back element of the lens. In the second part, we will use a heuristic to determine how "in focus" a patch of an image is. The focus metric we intend to use is the variance of the image patch: blurrier patches tend to have lower variance, while sharper patches have higher variance. We will also implement an autofocus search that estimates the depth at which the image patch attains the highest focus metric (a sketch of both appears below). In the last part, we will speed up our autofocus search.
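To make the metric concrete, here is a minimal C++ sketch of the variance metric and a baseline sweep over sensor depths. The LensCamera interface, the renderPatch helper, and the sweep bounds are our own placeholder assumptions for illustration, not the spec's API.

```cpp
#include <cmath>
#include <vector>

// Stand-in for the project's real camera type; names are our assumptions.
struct LensCamera {
  double sensorDepth = 0.0;
  void setSensorDepth(double d) { sensorDepth = d; }
};

// Hypothetical helper: re-render the selected patch at the camera's current
// sensor depth and return grayscale pixel intensities. Stubbed here.
std::vector<double> renderPatch(const LensCamera& cam) {
  (void)cam;
  return std::vector<double>(64, 0.5);  // stub for illustration only
}

// Focus metric: variance of the patch's intensities. Defocus blur averages
// neighboring pixels together, pulling values toward the mean, so a blurry
// patch has low variance and a sharp one has high variance.
double focusMetric(const std::vector<double>& patch) {
  double mean = 0.0;
  for (double p : patch) mean += p;
  mean /= patch.size();
  double var = 0.0;
  for (double p : patch) var += (p - mean) * (p - mean);
  return var / patch.size();
}

// Baseline autofocus: sweep the sensor depth over [dMin, dMax] in fixed
// steps and keep the depth whose rendered patch scores highest.
double autofocusSweep(LensCamera& cam, double dMin, double dMax, double step) {
  double bestDepth = dMin, bestScore = -1.0;
  for (double d = dMin; d <= dMax; d += step) {
    cam.setSensorDepth(d);
    double score = focusMetric(renderPatch(cam));
    if (score > bestScore) { bestScore = score; bestDepth = d; }
  }
  cam.setSensorDepth(bestDepth);
  return bestDepth;
}
```

Each candidate depth costs a full patch render, which is what motivates the speedups discussed in the last part.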
We plan to implement a fully functional pathtracer with support for different camera lenses (both simple/ideal and complex/compound), as well as autofocus based on the camera's current perspective. With these features, we intend to investigate and compare the effect that lenses with different structures and parameters have on the rendering quality and speed of scenes with varying characteristics. We will also implement free camera movement so that the same scene can be shown from different perspectives, highlighting how the autofocus responds to the same scene viewed from different angles and positions. Our demos will involve rendering pictures that show off the effect of autofocus on several scenes, such as the scenes given to us in past projects, along with other interesting ones we find online. We will also render some scenes using various lenses. These results will be compared against scenes rendered with more basic setups, such as no autofocus or the existing click-based autofocus. We will also include a GUI supporting features such as importing different lenses and changing the camera perspective.
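The core operation behind supporting compound lenses is stepping a ray through each element: intersect the next spherical surface, then bend the ray with Snell's law. Below is a rough sketch of that per-surface step, assuming spherical elements centered on the optical (z) axis; all names are our own, not from the spec, and a real implementation would also choose the near/far hit based on the sign of the element's radius and reject rays that miss the element's aperture.

```cpp
#include <cmath>
#include <optional>

// Minimal 3-vector; real code would use the project's vector class.
struct Vec3 { double x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(double s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Ray { Vec3 o, d; };  // d assumed unit length

// Intersect a ray with a spherical surface of radius R centered on the
// optical (z) axis at zCenter; returns the first hit in front of the ray.
std::optional<double> intersectSurface(const Ray& r, double zCenter, double R) {
  Vec3 oc = r.o - Vec3{0.0, 0.0, zCenter};
  double b = 2.0 * dot(oc, r.d);
  double c = dot(oc, oc) - R * R;
  double disc = b * b - 4.0 * c;  // quadratic with a = 1 (unit direction)
  if (disc < 0.0) return std::nullopt;
  double t = (-b - std::sqrt(disc)) / 2.0;
  if (t < 1e-9) t = (-b + std::sqrt(disc)) / 2.0;  // ray starts inside
  if (t < 1e-9) return std::nullopt;
  return t;
}

// Refract unit direction d at a surface with unit normal n facing the
// incoming ray, with eta = n_incident / n_transmitted (Snell's law).
// Returns nullopt on total internal reflection; such rays are discarded.
std::optional<Vec3> refract(Vec3 d, Vec3 n, double eta) {
  double cosI = -dot(d, n);
  double sin2T = eta * eta * (1.0 - cosI * cosI);
  if (sin2T > 1.0) return std::nullopt;
  double cosT = std::sqrt(1.0 - sin2T);
  return eta * d + (eta * cosI - cosT) * n;
}
```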
If enough time remains after implementing the above, we hope to also implement significant speedups for the autofocus functionality and investigate which kinds of algorithms yield the most significant improvements (one candidate is sketched below). We hope to quantify the speedups by measuring the time to render the same scene from equivalent camera perspectives with and without them, and to present the results in graphs of computation time for scenes of varying complexity and contrasting features. Judging scene quality is harder to reduce to precise metrics, so we will assess renders by signs of aliasing and by overall visual quality. We will also extend our GUI to support swapping between different speedup methods, such as different heuristics, if our speedup implementations end up using something along those lines. If the speedups are sufficient, we will also aim to create real-time demos where the camera can be moved around while autofocus is applied concurrently, which may involve GPU ray tracing. Additionally, we hope to implement other features such as texture mapping for path tracing or iridescence, and to investigate how autofocus interacts with such scenes (perhaps without a real-time demo, depending on the run time of these features).
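One speedup candidate we may try (our own assumption, not something the spec prescribes) is golden-section search over the sensor depth. If the focus metric is unimodal in depth, the peak can be bracketed with a logarithmic number of patch renders instead of a full linear sweep. The focusAt callback is a placeholder for "set the sensor depth, render the patch, return its variance".

```cpp
#include <cmath>
#include <functional>

// Golden-section search for the depth that maximizes the focus metric.
// Requires roughly log((hi - lo) / tol) / log(1 / phi) metric evaluations.
double autofocusGolden(double lo, double hi, double tol,
                       const std::function<double(double)>& focusAt) {
  const double phi = (std::sqrt(5.0) - 1.0) / 2.0;  // inverse golden ratio
  double a = hi - phi * (hi - lo);
  double b = lo + phi * (hi - lo);
  double fa = focusAt(a), fb = focusAt(b);
  while (hi - lo > tol) {
    if (fa < fb) {  // peak lies in [a, hi]; reuse b as the new left probe
      lo = a; a = b; fa = fb;
      b = lo + phi * (hi - lo);
      fb = focusAt(b);
    } else {        // peak lies in [lo, b]; reuse a as the new right probe
      hi = b; b = a; fb = fa;
      a = hi - phi * (hi - lo);
      fa = focusAt(a);
    }
  }
  return (lo + hi) / 2.0;
}
```

Monte Carlo noise can make the metric locally non-unimodal, so a practical version might first run a coarse sweep and only refine around the best bucket.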
We will be using https://cs184.eecs.berkeley.edu/sp16/article/19 as the spec for the base features of our project (ray tracing through lenses and autofocus).
We will be using our personal hardware to render and run demos.