@samuelgarcia @PeterNSteinmetz
Hello - as a follow-up to the fix provided by PR #957, which resolved the basic access issue raised in Issue #959, I am noticing behavior similar to the old Issue #735.
Simple summary: I'm trying to use the single-file mode for Neuralynx through spikeextractors by making a list of NeuralynxRecordingExtractors, one per file (safer than using the dirname directly), as follows:
```python
test = [NeuralynxRecordingExtractor(filename=filename) for filename in neuralynx_files]
```
where neuralynx_files is a list of several .ncs file paths. While this works fine from a purely functional standpoint (all data and metadata are readily reported and appear accurate), I noticed that the more files I included in neuralynx_files, the slower the loading became and the more RAM was consumed. On the upper end (64 files, up to 1 GB each), my system could not load all the files I need to use; a minimal reproduction sketch is below.
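For reference, here is a minimal sketch of how I build the list and watch the per-file memory growth. The glob pattern, the placeholder path, and the use of psutil to read RSS are my own illustrative choices, not part of spikeextractors:

```python
import glob

import psutil  # illustrative choice; any way of reading process RSS works
from spikeextractors import NeuralynxRecordingExtractor

neuralynx_files = sorted(glob.glob("/path/to/session/*.ncs"))  # placeholder path

process = psutil.Process()
extractors = []
for filename in neuralynx_files:
    extractors.append(NeuralynxRecordingExtractor(filename=filename))
    # RSS grows by roughly the size of each .ncs file as extractors are created,
    # which is what suggests eager loading rather than lazy access
    print(f"{filename}: RSS = {process.memory_info().rss / 1e6:.1f} MB")
```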
Via a memory profiler I was able to trace the memory usage through the RawNeoIO in spikeextractors to the parse_header() call in neo itself, and from there down to the call to scan_ncs_files(), which appears to load all the data of each .ncs file rather than using lazy access through memory maps and the like. The sketch below illustrates the distinction I have in mind.
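To clarify what I mean by lazy access, here is a rough sketch, not neo's actual implementation. The function names read_eager and read_lazy are mine, and I'm assuming the standard 16 kB Neuralynx text header when computing the offset:

```python
import numpy as np

NCS_HEADER_SIZE = 16 * 1024  # Neuralynx files begin with a 16 kB ASCII header

def read_eager(path):
    # Reads the entire file into RAM up front; memory scales with file size,
    # matching the behavior I'm seeing when many extractors are created.
    with open(path, "rb") as f:
        f.seek(NCS_HEADER_SIZE)
        return np.fromfile(f, dtype=np.uint8)

def read_lazy(path):
    # Memory-maps the file instead; pages are only pulled in when slices are
    # accessed, so scanning headers of many large files stays cheap.
    return np.memmap(path, dtype=np.uint8, mode="r", offset=NCS_HEADER_SIZE)
```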
Is this intended, or is it a bug caused by the new filename feature?