Distortion Details
Distortion Overview
Lens distortion is the degree to which the lens optics bend incoming light rays compared to a straight geometric projection. It’s most obvious in wide angle lenses, where the curved ‘barrel distortion’ effect is visible on straight lines, but it also occurs on longer lenses (usually as ‘pincushion’ distortion).
Distortion needs to be removed to clearly see how well a given camera track aligns with a (typically undistorted) view of a 3D model.
The high-level point: the kinds of shots that are typically really hard to track in post (like medium close-ups with blurred backgrounds) work great with Jetset Cine’s live tracking data, while the kinds of shots that need more tracking refinement (like shots with heavy ground contact or live action to CGI intersections) are well suited to post-production tracking refinement using the Jetset tracking data as a jumping-off point.
Methods of Removing Distortion
There are several frequently used methods of removing distortion:
1. Shooting a ‘grid’ or ‘checkerboard’, and using tracking software to recognize the pattern and undistort it (a minimal example follows this list)
2. Using lines in the footage that should be straight, and adjusting the distortion visually until they are straight
3. Solving for the distortion in the 3D camera tracking software. Camera solvers use a non-linear least-squares estimator to find the camera pose with the lowest reprojection error, and can solve for distortion while simultaneously solving for camera pose and FOV.
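For reference, here’s roughly what the first method looks like in practice: a minimal sketch using OpenCV’s checkerboard calibration. The image folder, file pattern, and 9×6 corner count are assumptions for illustration, not anything Jetset-specific.

```python
import glob

import cv2
import numpy as np

# Interior corner count of the printed checkerboard (9x6 is a common target).
pattern = (9, 6)

# Planar 3D coordinates of the checkerboard corners (z = 0).
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib/*.jpg"):  # hypothetical folder of grid photos
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        # Refine detected corners to sub-pixel accuracy.
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Solve for intrinsics plus distortion coefficients (k1, k2, p1, p2, k3).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)
print("distortion coefficients:", dist.ravel())
```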
How Jetset Cine Calculates Distortion
Jetset Cine uses a version of method #3: it solves for the lens distortion as part of the process of solving for the cine lens’s physical offset from the iPhone lens, as well as the cine lens’s field of view.
It uses the natural feature points in the scene, which appear as small red squares during the calibration process. When corresponding matches are detected in a test frame, Jetset draws lines between the matching feature points.
When the user presses Solve in Autoshot, Autoshot performs a small photogrammetry-style solve, matching the feature points across all of the images. The result is the 3D offset, FOV, and distortion of the cine lens.
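At its core, that solve is a non-linear least-squares problem: adjust the unknown lens parameters until the reprojected feature points land on their observed 2D matches. The sketch below is only an illustration of that idea, not Autoshot’s actual solver; the synthetic points, focal length, and k1 value are invented for the demo.

```python
import numpy as np
from scipy.optimize import least_squares

def project(pts3d, focal, k1, cx, cy):
    """Pinhole projection with a single-term radial distortion model."""
    x = pts3d[:, 0] / pts3d[:, 2]
    y = pts3d[:, 1] / pts3d[:, 2]
    warp = 1.0 + k1 * (x**2 + y**2)   # one-term radial factor
    return np.column_stack([focal * x * warp + cx,
                            focal * y * warp + cy])

def residuals(params, pts3d, observed, cx, cy):
    focal, k1 = params
    return (project(pts3d, focal, k1, cx, cy) - observed).ravel()

# Synthetic feature points and their observed 2D matches (with pixel noise).
rng = np.random.default_rng(0)
pts3d = rng.uniform([-1, -1, 4], [1, 1, 8], (200, 3))
true_focal, true_k1, cx, cy = 1500.0, -0.12, 960.0, 540.0
observed = project(pts3d, true_focal, true_k1, cx, cy)
observed += rng.normal(0, 0.3, observed.shape)

# Jointly recover focal length and k1 by minimizing reprojection error.
fit = least_squares(residuals, x0=[1000.0, 0.0],
                    args=(pts3d, observed, cx, cy))
print("focal =", fit.x[0], " k1 =", fit.x[1])
```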
Optimizing the Jetset Cine Solve
Since the calibration process uses natural features, it’s a good idea to use a location with plenty of detectable feature points. A herringbone-patterned carpet works wonders, and the usual clutter of a soundstage works well too. The features should be well lit, so it’s also a good idea to throw light on whatever you’re shooting to calibrate.
Note that once the solve is locked, Jetset Cine no longer needs to find as many feature points, so you can safely increase the stage lighting during calibration to make sure Cine finds a lot of well-distributed feature points. Since most of the lens distortion occurs at the edges of the frame, try to get feature points near the frame edges to get an accurate distortion solve.
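To see why edge coverage matters: with a one-term radial model, a feature at normalized radius r shifts by about k1·r³, so points near the frame edge constrain k1 far more strongly than central ones. A quick numeric check (k1 = -0.1 is just an assumed example value):

```python
# Shift of a feature under a one-term radial model:
# r_distorted = r * (1 + k1 * r^2), so the displacement is k1 * r^3.
k1 = -0.1
for r in (0.1, 0.5, 1.0):  # normalized distance from the image center
    print(f"r = {r:.1f}  shift = {k1 * r**3:+.4f}")
```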
It’s also important to realize that some of the distortion problem can be solved more easily in post production, so getting a decent Cine solve quickly is usually better than holding up production.
Jetset Cine Distortion Model
Jetset Cine uses a distortion model with a single radial term (usually called k1). This is simple, provides stable solves, and works well with most lenses to calculate an accurate offset and FOV for the cine lens. Autoshot will generate an STMap of the distortion values as part of the solve process, and for many shots that can be used directly in Nuke or other applications.
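As a rough illustration of what an STMap encodes, here is a numpy sketch that builds a per-pixel (u, v) lookup table for a one-term model. The radius normalization and bottom-up v axis are assumed conventions that vary between tools; this is not Autoshot’s exact math.

```python
import numpy as np

def stmap_one_term(width, height, k1):
    """Per-pixel (u, v) source coordinates for undistorting a plate."""
    ys, xs = np.mgrid[0:height, 0:width].astype(np.float64)
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    scale = min(width, height) / 2.0          # assumed radius normalization
    x, y = (xs - cx) / scale, (ys - cy) / scale
    warp = 1.0 + k1 * (x * x + y * y)         # single-term radial factor
    # Each undistorted output pixel samples the distorted plate here:
    sx = x * warp * scale + cx
    sy = y * warp * scale + cy
    # STMaps store coordinates normalized to [0, 1]; v is often bottom-up.
    return np.dstack([sx / (width - 1), 1.0 - sy / (height - 1)])

stmap = stmap_one_term(1920, 1080, k1=-0.1)
print(stmap.shape)  # (1080, 1920, 2): one (u, v) lookup per pixel
```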
How Good is the Live Tracking?
For many kinds of shots that don’t have visible ground contact or a visible CGI to live action seam, the default Jetset Cine tracking data and solve are frequently fine. Here’s an example of a finished shot with no post-tracking, using the Jetset Cine track straight out of Autoshot.
This is a good example of a shot that would be more difficult to track in post with traditional methods, as the background is out of focus, but the on-set tracking data works fine.
This kind of shot (medium-close, background defocused, etc.) is very common, so in many cases a large fraction of a project’s shots can use the on-set tracking data.
Post Production Tracking Refinement
More sophisticated distortion modeling and tracking are required only when shots have a lot of visible ground contact. The good news is that this is exactly the kind of shot that usually has enough visible surface detail for post-production tracking software to refine the Jetset solve with improved distortion models.
Autoshot has exporters for both Nuke and Syntheyes. Each has a good 3D camera tracking solver, but the approach used differs a little, as each solver has different features.
With Nuke, we use a ‘solve and re-orient’ method to rapidly align the Nuke solve to the Jetset solve. This fixes the usual monocular solve problem of missing scale and orientation information, and preserves the 3D orientation of the Jetset track automatically. That method will be covered in a separate tutorial.
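The core of ‘re-orient’ is a similarity alignment: find the single scale, rotation, and translation that best map the fresh solve’s camera path onto the Jetset path. Below is a generic sketch of that technique (Umeyama’s method), not Nuke’s or Autoshot’s actual code:

```python
import numpy as np

def umeyama(src, dst):
    """Least-squares similarity transform: dst ~ scale * R @ src + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))   # cross-covariance
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U) * np.linalg.det(Vt))])
    R = U @ D @ Vt                                 # guard against reflection
    scale = np.trace(np.diag(S) @ D) / src.var(axis=0).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t

# Toy check: recover a known transform between two "camera paths".
rng = np.random.default_rng(1)
path_a = rng.normal(size=(50, 3))       # e.g. a fresh monocular solve
true_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(true_R) < 0:
    true_R[:, 0] *= -1                  # make it a proper rotation
path_b = 2.5 * path_a @ true_R.T + np.array([1.0, 2.0, 3.0])

s, R, t = umeyama(path_a, path_b)
print(np.allclose(s * path_a @ R.T + t, path_b))  # True
```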
With Syntheyes, there are several paths that can be used:
- We can ‘seed’ the camera path with our initial Jetset solve, and then weight that original solve to ‘hard lock’ or ‘soft lock’ the Syntheyes solver to the original Jetset path (a toy sketch of this weighting follows the list)
- Since the Syntheyes exporter writes out the scene 3D scan geometry correctly transformed by the scene locator used during the take, we can use ‘survey’ methods more commonly used in high end VFX productions to lock the solve to 3D geometry. This becomes extremely useful when dealing with more complex lenses like anamorphic lenses.
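As a toy illustration of the weighting idea behind ‘hard lock’ and ‘soft lock’: the solver minimizes its usual data term plus a weighted penalty pulling each camera position toward the seed path, so the weight controls how far the refined solve may drift. Everything below (paths, noise, weights) is invented for the demo; it is not Syntheyes’ actual solver.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical seed path (the Jetset solve) and a stand-in data term.
seed_path = np.cumsum(np.full((20, 3), 0.1), axis=0)
rng = np.random.default_rng(2)
noisy_obs = seed_path + rng.normal(0, 0.05, seed_path.shape)

def residuals(flat, weight):
    cams = flat.reshape(-1, 3)
    data_term = (cams - noisy_obs).ravel()            # stand-in for reprojection error
    prior_term = weight * (cams - seed_path).ravel()  # pull toward the seed
    return np.concatenate([data_term, prior_term])

for weight in (0.1, 10.0):  # soft lock vs. near-hard lock
    fit = least_squares(residuals, seed_path.ravel(), args=(weight,))
    drift = np.abs(fit.x.reshape(-1, 3) - seed_path).max()
    print(f"weight = {weight:>4}: max drift from seed = {drift:.4f}")
```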
With all of these techniques, the key concept is that the original 3D camera tracked positions, generated on set and referenced to the 3D models in Jetset, are preserved and transferred through the post-production process automatically. This preserves the original on-set framing intent without guessing at scale and position.