nDisplay Merger helps you composite images rendered with nDisplay using Unreal Engine’s Movie Render Queue (UE 5.1+). The desktop app (ui.py / nDisplayMerger.exe) has two tabs:
- Config Merger — stitches nDisplay viewports into one image per frame using your .ndisplay Output Mapping (original workflow).
- Stereo VR Merger — takes left-eye and right-eye cubemap renders (six square face images per eye per frame) and runs them through py360convert (cube map → equirectangular). PNG inputs carry alpha through to PNG output (each color/alpha channel is resampled like RGB). Choose an output mode: Equirectangular stereo (over/under) for one stacked image per frame, or Equirectangular mono for separate equirectangular files per eye under left_eye/ and right_eye/.
When rendering nDisplay with Movie Render Queue, Unreal outputs one image per viewport per frame and does not compose the viewports according to the Output Mapping in the nDisplay configuration.
This tab takes:
- The folder with the rendered images
- The nDisplay configuration file (the same one that defines the Output Mapping)
and produces one merged image per frame, laid out exactly as defined in the nDisplay config.
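As a sketch of what the merge does (an illustration, not the tool's actual implementation), each viewport image is pasted into a canvas at the region its Output Mapping entry defines. Assuming each viewport contributes an image plus an (x, y) top-left offset:

```python
import numpy as np

def composite_viewports(viewports, canvas_w, canvas_h):
    """Paste each rendered viewport into one canvas image.

    viewports: list of (image, x, y), where image is an HxWx3 uint8
    array and (x, y) is the top-left offset from the Output Mapping.
    """
    canvas = np.zeros((canvas_h, canvas_w, 3), dtype=np.uint8)
    for image, x, y in viewports:
        h, w = image.shape[:2]
        canvas[y:y + h, x:x + w] = image
    return canvas

# Two 2x2 viewports side by side in a 2x4 canvas
left = np.full((2, 2, 3), 255, dtype=np.uint8)
right = np.zeros((2, 2, 3), dtype=np.uint8)
merged = composite_viewports([(left, 0, 0), (right, 2, 0)], 4, 2)
print(merged.shape)  # (2, 4, 3)
```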
Input and output naming use Movie Render Queue–style templates with {placeholders} (type { in the UI for a keyword list). The input template must include {camera_name}, {frame_number}, and {ext} (.jpeg, .jpg, or .png only; EXR is not supported). Defaults: input {sequence_name}.{camera_name}.{frame_number}.{ext}, output {sequence_name}.{frame_number}.{ext}. Every output placeholder must appear in the input template; merged output uses the same image format as the inputs (PNG or JPEG).
If {render_pass} is in the input template, jobs are keyed by (render pass, frame) so passes are never mixed in one composite. {render_pass} is required in the output template when more than one pass is processed (e.g. several checkboxes selected in the GUI, or more than one distinct pass in the batch when using the CLI without a pass filter); with only one pass selected or present, you may omit {render_pass} from the output path. The GUI shows a Render passes section (all passes checked by default); unchecked passes are skipped. Validation errors for missing viewports include the render pass and frame. In settings.json, input/output naming strings are written when you click Run (not when closing the app); render-pass checkbox choices are saved on close and again when a run starts.
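The grouping rule above can be illustrated with a small sketch that turns an MRQ-style template into a regex and keys files by (render pass, frame). The placeholder names come from the docs; the tool's actual parsing may differ:

```python
import re
from collections import defaultdict

def template_to_regex(template):
    """Convert an MRQ-style {placeholder} template into a regex (sketch)."""
    pattern = re.escape(template)
    for key in ("sequence_name", "camera_name", "render_pass"):
        pattern = pattern.replace(re.escape("{%s}" % key), r"(?P<%s>[^.]+)" % key)
    pattern = pattern.replace(re.escape("{frame_number}"), r"(?P<frame_number>\d+)")
    pattern = pattern.replace(re.escape("{ext}"), r"(?P<ext>jpe?g|png)")
    return re.compile(pattern + r"$")

rx = template_to_regex("{sequence_name}.{render_pass}.{camera_name}.{frame_number}.{ext}")
jobs = defaultdict(list)
for name in ["Seq.Beauty.VP_A.0001.png", "Seq.Beauty.VP_B.0001.png",
             "Seq.ObjectIds.VP_A.0001.png"]:
    m = rx.match(name)
    jobs[(m["render_pass"], m["frame_number"])].append(name)

print(sorted(jobs))  # [('Beauty', '0001'), ('ObjectIds', '0001')]
```

Because the Beauty and ObjectIds files land under different keys, the two passes are never composited into the same output image.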
Use this when you have two folders of cubemap face renders:
- Left eye directory — for each frame, six images whose viewport names include the face tokens FRONT, BACK, LEFT, RIGHT, UP, DOWN (matched as separate words; case-insensitive).
- Right eye directory — the same frame numbers and the same face layout as the left folder.
Use Input naming / Output naming with {camera_name}, {frame_number}, {ext}, and other MRQ tokens as needed. The {camera_name} segment must encode the cubemap face (FRONT, BACK, LEFT, RIGHT, UP, DOWN as separate tokens). Default input matches Config Merger: {sequence_name}.{camera_name}.{frame_number}.{ext}. With {render_pass} in the input template, stereo processing groups by pass and frame; {render_pass} in the output template is required only when multiple passes are processed (same rule as Config Merger). The GUI Render passes checkboxes control which passes run (same persistence rules as Config Merger).
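The face matching described above ("separate tokens, case-insensitive") can be sketched as splitting the camera name on non-letter characters and looking for a face keyword; the real matcher may differ:

```python
import re

FACES = ("FRONT", "BACK", "LEFT", "RIGHT", "UP", "DOWN")

def face_of(camera_name):
    """Return the cubemap face token found in camera_name, or None."""
    # Split on anything that is not a letter so "UPPER" is one token
    # and does not count as the UP face.
    for token in re.split(r"[^A-Za-z]+", camera_name):
        if token.upper() in FACES:
            return token.upper()
    return None

print(face_of("CAM_Front_0001"))  # FRONT
print(face_of("cam_up"))          # UP
print(face_of("CAM_UPPER"))       # None
```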
Output modes (GUI Output mode dropdown, or output_mode when calling stereo_merger.main in Python):
| Mode | Default output pattern |
|---|---|
| Equirectangular stereo (over/under) | One stacked image per frame — {sequence_name}.StereoEquirect.{frame_number}.{ext} (no {eye}). |
| Equirectangular mono | One equirectangular file per eye — {eye}/{sequence_name}.Equirect.{frame_number}.{ext} with {eye} = left_eye or right_eye. |
Over/under templates must not use {eye}; mono templates must include {eye}. Output format follows {ext} from the inputs (PNG or JPEG). EXR is not supported.
If you leave output empty, the base folder defaults to merged_stereo next to the parent of the left eye folder; subfolders come from your output naming template (defaults include left_eye / right_eye for mono).
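For the over/under mode, the stacked frame is conventionally the left-eye equirectangular image on top and the right-eye image below; a minimal numpy sketch of that stacking step (not the tool's actual code):

```python
import numpy as np

def stack_over_under(left_equirect, right_equirect):
    """Stack the left-eye image above the right-eye image (over/under)."""
    assert left_equirect.shape == right_equirect.shape
    return np.vstack([left_equirect, right_equirect])

left = np.zeros((1024, 2048, 3), dtype=np.uint8)
right = np.full((1024, 2048, 3), 255, dtype=np.uint8)
frame = stack_over_under(left, right)
print(frame.shape)  # (2048, 2048, 3)
```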
Memory: each stereo frame uses a lot of RAM inside py360convert (especially at 2K/4K face resolution). In the UI, use a low Workers value (often 1–2) on this tab if you see out-of-memory issues. Stereo conversion is invoked from the GUI or by calling stereo_merger.main(...) in Python; there is no separate stereo CLI entry point.
- Python 3.9+ (tested with 3.9/3.10)
- Unreal Engine 5.1 or later (for Movie Render Queue with nDisplay)
- The Python dependencies in requirements.txt (includes simplejpeg / libjpeg-turbo for faster JPEG read/write, plus numpy, scipy, and py360convert for the Stereo VR tab)
Create and activate a virtual environment (recommended):
```shell
# from the project root
python -m venv .venv
# Windows (PowerShell)
.venv\Scripts\Activate.ps1
# Windows (cmd.exe)
.venv\Scripts\activate.bat
pip install -r requirements.txt
```
Launch nDisplayMerger.exe (see Compile to Executable) or python ui.py. Use Run / Pause / Resume and Stop in the footer; set Start frame / End frame and Workers on the active tab. The ? button on each tab opens detailed help.
- Create your nDisplay config
- Render with Movie Render Queue (nDisplay)
- Run the merger
  - Select the input directory (rendered images) and the nDisplay config (.ndisplay).
  - Set Input naming and Output naming if your MRQ file pattern differs from the defaults.
  - Optionally set an output directory; otherwise output goes to a merged folder inside the input directory.
  - Adjust Workers if you want more or fewer parallel frame jobs.
- Review the result
- Render left and right cubemap face sequences into two separate folders (same frame numbering; six faces per frame per eye, with FRONT/BACK/LEFT/RIGHT/UP/DOWN in the {camera_name} segment, as in the in-app help).
- Choose Left eye and Right eye directories, Input naming / Output naming (defaults match MRQ-style sequence/camera/frame/ext), Output mode (over/under vs per-eye mono), optional output path, frame range, and Workers (keep low for heavy resolutions).
- Click Run and open the output folder when the job finishes.
You can also run nDisplay Merger directly from the command line:
```shell
python .\nDisplayMerger.py .\Example\MovieRenders .\Example\nDisplayConfig.ndisplay
```
Where:
- .\Example\MovieRenders is the folder with the rendered viewport images.
- .\Example\nDisplayConfig.ndisplay is the exported nDisplay config file.
Optional: --jobs N sets how many frames merge in parallel (default: up to 16 workers, capped by CPU count). Use --jobs 1 for sequential processing.
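The default worker count described above ("up to 16 workers, capped by CPU count") can be sketched as follows; `jobs_flag` mirrors a hypothetical parsed --jobs value, and this is an illustration rather than the tool's exact code:

```python
import os

def default_workers(jobs_flag=None):
    """Return the number of parallel frame jobs.

    jobs_flag: the parsed --jobs value, or None for the default
    (up to 16 workers, capped by the machine's CPU count).
    """
    if jobs_flag is not None:
        return max(1, jobs_flag)
    return min(16, os.cpu_count() or 1)

print(default_workers(1))  # 1 (sequential, like --jobs 1)
```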
This will create a merged folder inside .\Example\MovieRenders with one composed image per frame.
Stereo VR processing is not exposed as a separate CLI script; use the Stereo VR Merger tab or call stereo_merger.main(...) from Python.
If you want to ship a standalone executable (no Python required for end users), you can build it with PyInstaller:
```shell
python -m PyInstaller --onefile --windowed ui.py --additional-hooks-dir=. --name=nDisplayMerger --icon=assets\app.ico
```
This will generate dist\nDisplayMerger.exe, which you can distribute to artists/TDs. The application name is nDisplay Merger, and the executable file name is nDisplayMerger.exe.






