
Negative Array Size exception (Fiji/Windows x64) #6

@msymeonides

I am trying to run 3D Denoising (Tribolium) in Fiji on Windows 10 x64. It works fine (actually works beautifully) on a small dataset (500 x 900 x 45, 16-bit, 39 MB), but fails on a much larger one (12900 x 2048 x 116, 16-bit, 5.7 GB) with the following error:

[Thu Jun 07 11:12:16 EDT 2018] [ERROR] [] Module threw exception
java.lang.NegativeArraySizeException
	at mpicbg.csbd.normalize.PercentileNormalizer.percentiles(PercentileNormalizer.java:113)
	at mpicbg.csbd.normalize.PercentileNormalizer.prepareNormalization(PercentileNormalizer.java:99)
	at mpicbg.csbd.commands.CSBDeepCommand.normalizeInput(CSBDeepCommand.java:303)
	at mpicbg.csbd.commands.CSBDeepCommand.runInternal(CSBDeepCommand.java:257)
	at mpicbg.csbd.commands.CSBDeepCommand.run(CSBDeepCommand.java:241)
	at mpicbg.csbd.commands.NetTribolium.run(NetTribolium.java:100)
	at org.scijava.command.CommandModule.run(CommandModule.java:199)
	at org.scijava.module.ModuleRunner.run(ModuleRunner.java:168)
	at org.scijava.module.ModuleRunner.call(ModuleRunner.java:127)
	at org.scijava.module.ModuleRunner.call(ModuleRunner.java:66)
	at org.scijava.thread.DefaultThreadService$3.call(DefaultThreadService.java:238)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
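My guess is an integer overflow: a NegativeArraySizeException at allocation time usually means an array length computation wrapped past Integer.MAX_VALUE, and this dataset has more voxels than that (see the arithmetic further down). Here is a minimal sketch of that failure mode, assuming the normalizer sizes a buffer from the total voxel count as an int; this is an illustration, not the actual PercentileNormalizer source, and the `float[]` buffer is a placeholder:

```java
// Sketch of the suspected failure mode (not the CSBDeep source).
// A voxel count above Integer.MAX_VALUE wraps to a negative int,
// and allocating an array with that length throws
// NegativeArraySizeException.
public class OverflowDemo {
    public static void main(String[] args) {
        long voxels = 12900L * 2048L * 116L;   // 3,064,627,200 voxels
        System.out.println("voxels            = " + voxels);
        System.out.println("Integer.MAX_VALUE = " + Integer.MAX_VALUE);

        int truncated = (int) voxels;          // wraps to -1,230,340,096
        System.out.println("(int) voxels      = " + truncated);

        float[] buffer = new float[truncated]; // throws NegativeArraySizeException
    }
}
```

If that is what is happening, the failure depends only on the total voxel count of the input image, not on how it is tiled.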

I have tried tweaking the number of tiles, anywhere from 1 up to 128, but I always get the same exception. The overlap is set to 32. I am running Java 1.8.0_161 and Fiji is up to date (ImageJ 1.52c). The PC is a dual Xeon with 128 GB RAM, and the data is hosted on an SSD RAID.

I can't think of anything different about the data other than its size. Both datasets were acquired with the same instrument and almost identical settings (the only real difference is the voxel depth: 1.5 µm in the working dataset vs. 2 µm in the failing one), and the SNR is similar in both; they are both very noisy. I'm just looking at nuclei (15 or so in the working dataset, several thousand in the failing one).

I cropped the failing dataset down to 1000 x 1000 x 45, 86 MB, and it works...
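For what it's worth, the raw voxel counts are consistent with the overflow idea (my arithmetic, not anything the plugin reports):

```
  500 x  900 x  45 =    20,250,000 voxels  -> works
 1000 x 1000 x  45 =    45,000,000 voxels  -> works (cropped)
12900 x 2048 x 116 = 3,064,627,200 voxels  -> fails (> 2^31 - 1 = 2,147,483,647)
```

Only the failing dataset exceeds Java's maximum array length.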
