132 changes: 132 additions & 0 deletions applications/virtual-fly-brain/demos/README.md
@@ -0,0 +1,132 @@
# NRRD to Neuroglancer Precomputed Conversion with Mesh Generation

Convert NRRD segmentation volumes to Neuroglancer precomputed format with volumetric data and 3D surface meshes for visualization in neuroglass-research.

## Quick Start

### Prerequisites

```bash
# Create new conda environment with Python 3.12
conda create -n vfb python=3.12

# Activate the environment
conda activate vfb

# Install dependencies
pip install cloud-volume pynrrd numpy requests scikit-image

# Install Node.js/npm if not already installed (for local HTTP server)
# macOS: brew install node
# Ubuntu: sudo apt install nodejs npm
```
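
To confirm the environment is set up correctly, a minimal import check (these are the modules the conversion script uses; `skimage` is assumed to be needed only by the mesh-generation step):

```python
# Sanity check: verify the packages used by the conversion scripts import cleanly.
import nrrd                          # provided by the pynrrd package
import numpy
import requests
import skimage                       # scikit-image (assumed to be used for mesh generation)
from cloudvolume import CloudVolume

print("All conversion dependencies are importable.")
```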

### Workflow

```bash
# 1. Convert all 3 NRRD files to precomputed format
python3 convert_nrrd_to_precomputed.py \
--input-url "http://v2.virtualflybrain.org/data/VFB/i/0010/1567/VFB_00101567/volume.nrrd" \
--input-url "http://v2.virtualflybrain.org/data/VFB/i/0010/12vj/VFB_00101567/volume.nrrd" \
--input-url "http://v2.virtualflybrain.org/data/VFB/i/0010/101b/VFB_00101567/volume.nrrd" \
--output-path "file://~/precomputed" \
--verbose

# 2. Generate meshes and segment properties for all 3 datasets
python3 generate_and_setup_meshes.py \
--input-path "file://~/precomputed/VFB_00101567_1567" \
--input-path "file://~/precomputed/VFB_00101567_12vj" \
--input-path "file://~/precomputed/VFB_00101567_101b" \
--verbose

# 3. Start HTTP server (keep running in a terminal)
cd ~
npx http-server . -p 8080 --cors
```

This creates:
```
~/precomputed/VFB_00101567_1567/ # Dataset 1
~/precomputed/VFB_00101567_12vj/ # Dataset 2
~/precomputed/VFB_00101567_101b/ # Dataset 3
~/precomputed/neuroglancer_state.json # Pre-configured state with all 3 layers
```

## Visualization Options

### Option 1: Use neuroglass.io (Easiest)

1. Go to [https://neuroglass.io](https://neuroglass.io)
2. Create a new Study or edit an existing one
3. Add a new layer:
- **Type**: `segmentation`
- **URL**: `http://localhost:8080/precomputed/VFB_00101567_1567`
4. View segments in the **Segments** tab
5. Click segment IDs to view volume + mesh rendering

### Option 2: Run neuroglass-research Locally

Follow the development setup instructions in the [neuroglass-research README](applications/neuroglass-research/README.md).

Then add your precomputed datasource:
- **URL**: `http://localhost:8080/precomputed/VFB_00101567_1567`
- **Type**: `segmentation`

## Loading All 3 Datasets

### Method 1: Use Generated State File (Recommended)

The mesh generation script creates a pre-configured Neuroglancer state with all 3 layers:

1. Open neuroglass.io or your local instance
2. Click the **`< >`** button (JSON state) in the top-right
3. Copy and paste the contents:
```bash
cat ~/precomputed/neuroglancer_state.json
```
4. Click **Apply** or close the editor

**Result:** All 3 datasets load as separate layers with their segments pre-selected. Each layer can also be switched to an "image" layer if desired.
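
If you want to check what the generated state contains before pasting it, a small sketch (assuming the standard Neuroglancer state layout with a top-level `layers` list):

```python
import json
from pathlib import Path

# Load the state file written by generate_and_setup_meshes.py
state = json.loads((Path.home() / "precomputed" / "neuroglancer_state.json").read_text())

# List each layer's name, type, and precomputed source
for layer in state.get("layers", []):
    print(layer.get("name"), layer.get("type"), layer.get("source"))
```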

## Complete Example

```bash
# One-shot test with all 3 datasets
python3 convert_nrrd_to_precomputed.py \
--input-url "http://v2.virtualflybrain.org/data/VFB/i/0010/1567/VFB_00101567/volume.nrrd" \
--input-url "http://v2.virtualflybrain.org/data/VFB/i/0010/12vj/VFB_00101567/volume.nrrd" \
--input-url "http://v2.virtualflybrain.org/data/VFB/i/0010/101b/VFB_00101567/volume.nrrd" \
--output-path "file://~/precomputed" \
--verbose

python3 generate_and_setup_meshes.py \
--input-path "file://~/precomputed/VFB_00101567_1567" \
--input-path "file://~/precomputed/VFB_00101567_12vj" \
--input-path "file://~/precomputed/VFB_00101567_101b" \
--verbose

cd ~
npx http-server . -p 8080 --cors

# Then load the generated state:
cat ~/precomputed/neuroglancer_state.json
# Copy and paste into neuroglass.io
```

## What Gets Created

```
~/precomputed/
├── VFB_00101567_1567/
│ ├── info # Main metadata
│ ├── 0/ # Volume data chunks
│ ├── mesh/ # 3D surface meshes
│ │ ├── info
│ │ ├── 1:0, 1:0:1.gz # Segment meshes
│ │ └── ...
│ └── segment_properties/ # Segment IDs list
│ └── info
├── VFB_00101567_12vj/ # Dataset 2 (same structure)
├── VFB_00101567_101b/ # Dataset 3 (same structure)
└── neuroglancer_state.json # Pre-configured state with all 3 layers
```
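
To spot-check that a dataset was written correctly, a minimal sketch that reads the top-level `info` of one output directory (field names follow the precomputed metadata written by the converter below):

```python
import json
from pathlib import Path

dataset = Path.home() / "precomputed" / "VFB_00101567_1567"

# Main precomputed metadata: layer type, data type, and volume size of scale 0
info = json.loads((dataset / "info").read_text())
print(info["type"], info["data_type"], info["scales"][0]["size"])

# The mesh and segment_properties subdirectories each carry their own info file
print((dataset / "mesh" / "info").exists(), (dataset / "segment_properties" / "info").exists())
```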
181 changes: 181 additions & 0 deletions applications/virtual-fly-brain/demos/convert_nrrd_to_precomputer.py
@@ -0,0 +1,181 @@
#!/usr/bin/env python3
"""
Minimal converter: NRRD -> Neuroglancer precomputed (volumetric only).
Writes an explicit scale key ("0") and supports optional gzip compression.
"""
from __future__ import annotations
import argparse
import os
import tempfile
import requests
import nrrd
import numpy as np
import json
import hashlib
from cloudvolume import CloudVolume
from urllib.parse import urlparse
from pathlib import Path

def download_to_temp(url: str) -> str:
if url.startswith('file://'):
        return url.replace('file://', '')
if os.path.exists(url):
return url
r = requests.get(url, stream=True)
r.raise_for_status()
fd, tmp = tempfile.mkstemp(suffix='.nrrd')
os.close(fd)
with open(tmp, 'wb') as f:
for chunk in r.iter_content(1024*1024):
f.write(chunk)
return tmp

def sanitize_name(name: str) -> str:
for c in [' ', '/', '\\', ':', '?', '&', '=', '%']:
name = name.replace(c, '_')
return name

def dataset_name_from_source(src: str, local_path: str) -> str:
try:
if src.startswith(('http://','https://','s3://','gs://')):
parsed = urlparse(src)
            parts = [p for p in parsed.path.rstrip('/').split('/') if p]
if len(parts) >= 3:
return sanitize_name(f"{parts[-2]}_{parts[-3]}")
if len(parts) >= 2:
return sanitize_name(parts[-2])
        parent = os.path.basename(os.path.dirname(local_path))
        grand = os.path.basename(os.path.dirname(os.path.dirname(local_path)))
if parent and grand:
return sanitize_name(f"{parent}_{grand}")
if parent:
return sanitize_name(parent)
except Exception:
pass
fname = os.path.splitext(os.path.basename(local_path))[0]
    short = hashlib.md5(src.encode('utf-8')).hexdigest()[:6]
return sanitize_name(f"{fname}_{short}")

def detect_spacing(header: dict):
if 'space directions' in header and header['space directions'] is not None:
try:
dirs = header['space directions']
spacings = [float(np.linalg.norm(d)) if d is not None else 1.0 for d in dirs]
return spacings[::-1]
except Exception:
pass
if 'spacings' in header:
try:
sp = header['spacings']
return list(map(float, sp))[::-1]
except Exception:
pass
return [1.0, 1.0, 1.0]
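
# Example: space directions [[0.5,0,0],[0,0.5,0],[0,0,1.0]] give per-axis norms
# [0.5, 0.5, 1.0] in the NRRD header's axis order; reversing them returns
# [1.0, 0.5, 0.5], keeping the spacing order consistent with the (2,1,0)
# transpose applied to the data in main().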

def main():
p = argparse.ArgumentParser()
p.add_argument("--input-url", action='append', required=True, help="NRRD URL or local path (repeatable)")
p.add_argument("--output-path", default=None, help="precomputed root: file:///full/path or gs:// / s3:// ; default file://~/precomputed")
p.add_argument("--verbose", action='store_true')
p. add_argument("--no-compress", action='store_true', help="Disable gzip compression (store raw uncompressed chunks)")
args = p.parse_args()

compress = not args.no_compress

    if args.output_path is None:
        home = str(Path.home())
        args.output_path = f"file://{home}/precomputed"

    if args.output_path.startswith('file://'):
        pth = args.output_path.replace('file://','')
        pth = os.path.expanduser(pth)
        args.output_path = 'file://' + pth

    out_root_local = None
    if args.output_path.startswith('file://'):
        out_root_local = args.output_path.replace('file://','')

    if out_root_local:
        os.makedirs(out_root_local, exist_ok=True)

mapping = {}

for src in args.input_url:
if args.verbose:
print("Processing:", src)
local_path = download_to_temp(src)
        data, header = nrrd.read(local_path)
        if data.ndim != 3:
raise RuntimeError(f"Only 3D volumes supported (got ndim={data.ndim}) for {src}")

# Transpose from ZYX (NRRD) to XYZ (Neuroglancer)
        arr_xyz = np.transpose(data, (2,1,0)).copy()
dtype_str = str(np.dtype(arr_xyz.dtype).name)
voxel_size = detect_spacing(header)
ds_name = dataset_name_from_source(src, local_path)
dest = args.output_path.rstrip('/') + '/' + ds_name

if args.verbose:
print("Writing dataset:", dest, "shape(XYZ):", arr_xyz.shape, "dtype:", dtype_str, "voxel_size:", voxel_size)

# Determine layer type
        is_segmentation = np.issubdtype(arr_xyz.dtype, np.integer)
layer_type = 'segmentation' if is_segmentation else 'image'

# For uint8/uint16, use raw encoding
encoding = 'raw'
compressed_segmentation_block_size = None

# Create info with explicit scale key = "0"
info = {
"data_type": dtype_str,
"num_channels": 1,
"scales": [{
"chunk_sizes": [[64, 64, 64]],
"encoding": encoding,
"key": "0", # Important: must be "0" not the resolution
"resolution": voxel_size,
"size": list(arr_xyz.shape),
"voxel_offset": [0, 0, 0]
}],
"type": layer_type
}

if compressed_segmentation_block_size:
info["compressed_segmentation_block_size"] = compressed_segmentation_block_size

# Create CloudVolume with the info
vol = CloudVolume(dest, mip=0, info=info, compress=compress)
vol.commit_info()

# Write the data
vol[:, :, :] = arr_xyz

if args.verbose:
print(f"Successfully wrote {ds_name}")
print(f" Encoding: {encoding}")
print(f" Compression: {'gzip (. gz files)' if compress else 'none (raw files)'}")
print(f" Scale key: 0")

mapping[src] = ds_name

# Clean up temp file
if src.startswith('http'):
try:
os.remove(local_path)
except Exception:
pass

if out_root_local:
mapping_path = os.path.join(out_root_local, 'sources_to_dataset.json')
with open(mapping_path, 'w') as f:
json.dump(mapping, f, indent=2)
if args.verbose:
print("Wrote mapping:", mapping_path)

print("Done. Datasets written to:", args.output_path)
print("Mapping (source -> dataset):")
    for k, v in mapping.items():
print(" ", k, "->", v)

if __name__ == "__main__":
main()