An advanced, real-time stereo rendering engine for 2D-to-3D conversion in VR and stereoscopic displays.
- Avoids depth slicing or zone segmentation.
- Uses soft weights from the depth map to mix foreground, midground, and background contributions:

  ```python
  raw_shift = (fg_weight * fg_shift +
               mg_weight * mg_shift +
               bg_weight * bg_shift)
  ```

- This enables smooth and natural parallax gradients across the image with no seams or pop-ins.
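The soft-weight blend can be sketched end to end. Everything below the fence line is an illustrative assumption: the Gaussian zone centers, the shared `sigma`, and the per-zone shift strengths are placeholders, not the engine's actual parameters.

```python
import torch

def soft_zone_weights(depth, fg_center=0.15, mg_center=0.5,
                      bg_center=0.85, sigma=0.25):
    """Hypothetical sketch: Gaussian soft weights over normalized depth
    (0 = near, 1 = far). The three weights are normalized to sum to 1,
    so zone contributions blend continuously instead of being sliced."""
    fg = torch.exp(-((depth - fg_center) ** 2) / (2 * sigma ** 2))
    mg = torch.exp(-((depth - mg_center) ** 2) / (2 * sigma ** 2))
    bg = torch.exp(-((depth - bg_center) ** 2) / (2 * sigma ** 2))
    total = fg + mg + bg
    return fg / total, mg / total, bg / total

depth = torch.rand(1, 1, 4, 4)                 # placeholder normalized depth map
fg_w, mg_w, bg_w = soft_zone_weights(depth)
fg_shift, mg_shift, bg_shift = 6.0, 3.0, 1.0   # example per-zone shifts in pixels
raw_shift = fg_w * fg_shift + mg_w * mg_shift + bg_w * bg_shift
```

Because the weights form a convex combination, `raw_shift` always lands between the smallest and largest zone shift, which is what guarantees seam-free parallax.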
- Calculates the convergence plane dynamically by analyzing the mode of a center-weighted histogram of the depth map:

  ```python
  subject_depth = estimate_subject_depth(depth_tensor)
  ```

- Produces a subject-anchored stereo window that keeps the scene's focus point at screen depth.
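A minimal sketch of what `estimate_subject_depth` might do, assuming a normalized depth map in `[0, 1]`; the bin count and the Gaussian falloff constant are illustrative, not taken from the engine.

```python
import torch

def estimate_subject_depth(depth, bins=64):
    """Hypothetical sketch: mode of a center-weighted histogram of the
    normalized depth map (0 = near, 1 = far). Pixels near the frame
    center get exponentially higher weight, so the dominant centered
    object decides the convergence depth."""
    _, _, h, w = depth.shape
    ys = torch.linspace(-1, 1, h).view(h, 1).expand(h, w)
    xs = torch.linspace(-1, 1, w).view(1, w).expand(h, w)
    center_weight = torch.exp(-(xs ** 2 + ys ** 2) / 0.25)  # Gaussian falloff

    hist, edges = torch.histogram(depth[0, 0].reshape(-1), bins=bins,
                                  range=(0.0, 1.0),
                                  weight=center_weight.reshape(-1))
    mode = int(hist.argmax())
    return 0.5 * (edges[mode] + edges[mode + 1])  # center of the winning bin

# A centered subject at depth 0.3 against a 0.9 background:
depth_tensor = torch.full((1, 1, 32, 32), 0.9)
depth_tensor[:, :, 8:24, 8:24] = 0.3
subject_depth = estimate_subject_depth(depth_tensor)
```

The center weighting is what makes the histogram mode track the subject rather than a large but peripheral background region.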
- The final zero-parallax shift is calculated as:

  ```python
  zero_parallax_offset = ((-subject_depth * fg) + (-subject_depth * mg)
                          + (subject_depth * bg)) / (resized_width / 2)
  ```

- Replaces the original convergence formula, improving tracking accuracy.
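A worked numeric instance of the formula, with illustrative values for the zone strengths `fg`, `mg`, `bg` (in pixels) and the output width:

```python
# Illustrative numbers only -- not the engine's defaults.
subject_depth = 0.4          # normalized depth of the tracked subject
fg, mg, bg = 6.0, 3.0, 1.0   # example per-zone shift strengths in pixels
resized_width = 1920

zero_parallax_offset = ((-subject_depth * fg) + (-subject_depth * mg)
                        + (subject_depth * bg)) / (resized_width / 2)
# (-2.4 - 1.2 + 0.4) / 960 = -3.2 / 960, roughly -0.0033
```

Dividing by half the width expresses the offset in normalized grid units, matching the `[-1, 1]` coordinate range used for warping.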
- Uses gradient-based masking to suppress parallax near high-contrast edges:

  ```python
  edge_mask = torch.sigmoid((grad_mag - edge_threshold) * feather_strength * 5)
  smooth_mask = 1.0 - edge_mask
  total_shift = total_shift * smooth_mask
  ```

- Prevents hard ghosting and edge bleed with no pre-processing.
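The feathering step can be sketched in full, with simple forward differences standing in for whatever gradient operator the engine actually uses; the threshold and strength values are illustrative assumptions.

```python
import torch

def feather_shift(total_shift, depth, edge_threshold=0.05, feather_strength=10.0):
    """Hypothetical sketch of edge-aware feathering: shift is attenuated
    where the depth map has strong gradients (i.e. depth discontinuities)."""
    # Gradient magnitude of the depth map via forward differences (zero-padded).
    dx = torch.zeros_like(depth)
    dy = torch.zeros_like(depth)
    dx[..., :, :-1] = depth[..., :, 1:] - depth[..., :, :-1]
    dy[..., :-1, :] = depth[..., 1:, :] - depth[..., :-1, :]
    grad_mag = torch.sqrt(dx ** 2 + dy ** 2)

    # Soft edge mask: sigmoid ramps from ~0 (flat region) to ~1 (strong edge).
    edge_mask = torch.sigmoid((grad_mag - edge_threshold) * feather_strength * 5)
    smooth_mask = 1.0 - edge_mask
    return total_shift * smooth_mask

depth = torch.zeros(1, 1, 8, 8)
depth[..., :, 4:] = 1.0                          # a hard vertical depth edge
shift = feather_shift(torch.full((1, 1, 8, 8), 2.0), depth)
# shift collapses toward 0 at the edge column, stays near 2.0 elsewhere
```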
- Smooths convergence shifts over time with adaptive momentum tracking:

  ```python
  zero_parallax_offset = floating_window_tracker.smooth_offset(zero_parallax_offset)
  ```

- Clamps offset within bounds and applies side masking for viewer comfort.
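One plausible shape for the tracker, assuming an exponential-moving-average smoother with clamping; the class name mirrors the snippet above, but the momentum value and offset bounds are assumptions, and the side-masking step is omitted.

```python
class FloatingWindowTracker:
    """Hypothetical sketch: EMA smoothing of the convergence offset."""

    def __init__(self, momentum=0.9, max_offset=0.05):
        self.momentum = momentum
        self.max_offset = max_offset
        self.state = None

    def smooth_offset(self, offset):
        # Exponential moving average: high momentum = slow, stable convergence,
        # which avoids the stereo window visibly jumping between frames.
        if self.state is None:
            self.state = offset
        else:
            self.state = self.momentum * self.state + (1 - self.momentum) * offset
        # Clamp within comfort bounds before applying to the frame.
        return max(-self.max_offset, min(self.max_offset, self.state))

tracker = FloatingWindowTracker()
smoothed = [tracker.smooth_offset(o) for o in (0.0, 0.04, 0.04, 0.04)]
# converges gradually toward 0.04 instead of snapping to it
```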
- Dynamically adjusts stereo strength based on scene flatness:

  ```python
  parallax_scale = compute_dynamic_parallax_scale(depth_tensor)
  ```

- Ensures stable 3D across both action scenes and low-contrast shots.
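A sketch of how `compute_dynamic_parallax_scale` could map depth spread to a damping factor; the standard-deviation thresholds are illustrative assumptions.

```python
import torch

def compute_dynamic_parallax_scale(depth, flat_std=0.02, full_std=0.15):
    """Hypothetical sketch: scale parallax by how much depth variation the
    scene actually has, so near-flat shots don't get exaggerated 3D."""
    spread = depth.std()
    # Map depth spread into [0, 1]: below flat_std fully damped,
    # above full_std full-strength parallax.
    scale = (spread - flat_std) / (full_std - flat_std)
    return float(scale.clamp(0.0, 1.0))

flat_scene = torch.full((1, 1, 16, 16), 0.5)            # no depth variation
deep_scene = torch.linspace(0, 1, 256).view(1, 1, 16, 16)  # full depth range
```

Using a per-frame statistic like the standard deviation keeps the adjustment cheap enough to run in real time.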
- CUDA-accelerated `grid_sample` warping of the frame based on shift values:

  ```python
  grid_left[..., 0] += shift_vals
  grid_right[..., 0] -= shift_vals
  ```

- Full left/right stereo generation without added latency, using PyTorch tensor operations.
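The grid warp can be shown end to end with PyTorch's `grid_sample`; the `(1, C, H, W)` frame layout and the assumption that `shift_vals` is already normalized to grid units (`[-1, 1]` coordinate space) are mine, not the source's.

```python
import torch
import torch.nn.functional as F

def warp_stereo(frame, shift_vals):
    """Minimal sketch: build an identity sampling grid, then offset its
    x-coordinates in opposite directions for the two eye views."""
    _, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0)   # (1, H, W, 2) identity grid

    grid_left = base.clone()
    grid_right = base.clone()
    grid_left[..., 0] += shift_vals    # per-eye grids offset horizontally
    grid_right[..., 0] -= shift_vals   # in opposite directions

    left = F.grid_sample(frame, grid_left, align_corners=True)
    right = F.grid_sample(frame, grid_right, align_corners=True)
    return left, right

frame = torch.rand(1, 3, 64, 64)
left, right = warp_stereo(frame, torch.zeros(1, 64, 64))  # zero shift = identity
```

On a CUDA tensor the same code runs entirely on the GPU, which is what makes per-frame stereo generation feasible in real time.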
- Adaptive DOF: multiple Gaussian-blurred versions composited using per-pixel depth weights:

  ```python
  blur_idx = blur_weights * (len(levels) - 1)
  ```

- Pixel Healing: smart fill of stereo occlusion zones using gradient-based mask blending.
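The `blur_idx` line implies per-pixel interpolation between precomputed blur levels. A hypothetical compositor for that step (the function name and tensor shapes are assumptions; the blur pyramid itself is taken as given):

```python
import torch

def composite_dof(levels, blur_weights):
    """Hypothetical sketch: lerp each pixel between its two neighboring
    blur levels. `levels` is a list of (C, H, W) tensors, sharpest first;
    `blur_weights` is an (H, W) map in [0, 1]."""
    stack = torch.stack(levels)                  # (L, C, H, W)
    L, C, H, W = stack.shape
    blur_idx = blur_weights * (L - 1)            # fractional level index per pixel
    lo = blur_idx.floor().long().clamp(max=L - 1)
    hi = (lo + 1).clamp(max=L - 1)
    frac = (blur_idx - lo.float()).unsqueeze(0)  # (1, H, W), broadcasts over C

    # Gather the two neighboring blur levels per pixel, then blend.
    lo_img = torch.gather(stack, 0, lo.view(1, 1, H, W).expand(1, C, H, W)).squeeze(0)
    hi_img = torch.gather(stack, 0, hi.view(1, 1, H, W).expand(1, C, H, W)).squeeze(0)
    return lo_img * (1 - frac) + hi_img * frac

# Three toy "blur levels" with constant values 0, 1, 2:
levels = [torch.zeros(1, 4, 4), torch.ones(1, 4, 4), torch.full((1, 4, 4), 2.0)]
dof_frame = composite_dof(levels, torch.full((4, 4), 0.25))
```

Blending adjacent levels instead of snapping to the nearest one avoids visible banding between blur zones.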
| Component | Description |
|---|---|
| Depth-Weighted Shift | Smooth pixel displacement without discrete zones |
| Subject-Aware Parallax Tracking | Dynamically centers stereo plane on dominant object |
| Edge-Aware Feathering | Prevents ghosting without segmentation |
| Floating Window Tracker | Stabilizes convergence across frames |
| Scene-Aware Parallax Dampening | Auto-tunes 3D intensity by depth stats |
| DOF + Healing | Simulated depth of field with gap healing |
| GPU Tensor Grid Warping | High-performance stereo image warping |
- All features above are original to VisionDepth3D.
- Designed and optimized in-house for real-time VR stereo authoring.
- No external segmentation, warping, or AI fill-inpainting required.
For citations or inquiries: https://github.com/VisionDepth/VisionDepth3D
📄 Licensed under: VisionDepth3D Custom Use License (No Derivatives)