I'm using the Binary Ninja API to serially disassemble and process a large number of binaries. My code looks something like the following:

```python
import binaryninja

for b in binaries:
    view = binaryninja.load(source=b)
    results = process_binary(view)
    write_to_disk(results)
```

I have noticed that memory usage of the above grows unbounded until the process eventually OOMs. Some light memory profiling leads me to believe that the native portion of Binary Ninja is to blame; at each iteration everything in Python goes out of scope and is presumably garbage collected. Moving the above code to a subprocess each iteration resolves the memory consumption issue - I assume each subprocess gets its own instance of the core. Unfortunately this has a significant performance cost: profiling shows up to a ~16x slowdown per iteration, and I suspect the subprocess approach loses significant time restarting the core every iteration. Is there some API I'm missing that will cause the Binary Ninja core to release/clean up held memory? Any suggestions would be very helpful 🙂
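For concreteness, the per-iteration subprocess workaround I'm describing looks roughly like this (a sketch; `process_one` is just a hypothetical wrapper around the calls above, and the `spawn` start method is used so each child starts with a fresh interpreter and core rather than a forked copy of the parent):

```python
import multiprocessing as mp

def process_one(b):
    # Import inside the child so the Binary Ninja core is initialized per process.
    import binaryninja
    view = binaryninja.load(source=b)
    results = process_binary(view)
    write_to_disk(results)

if __name__ == "__main__":
    ctx = mp.get_context("spawn")
    for b in binaries:
        p = ctx.Process(target=process_one, args=(b,))
        p.start()
        p.join()
```

This keeps memory bounded because the core's allocations die with each child, but it pays the full core startup cost on every binary, which is presumably where the ~16x slowdown comes from.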
This is a common mistake. There is a circular reference that prevents cleanup. You should use the context manager:

```python
with binaryninja.load(source=b) as view:
    results = process_binary(view)
    write_to_disk(results)
```

or close the view's file explicitly:

```python
for b in binaries:
    view = binaryninja.load(source=b)
    results = process_binary(view)
    write_to_disk(results)
    view.file.close()
```

Check out the docs for details: https://docs.binary.ninja/dev/batch.html?h=close#our-first-script
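One caveat on the explicit-close variant: if `process_binary` can raise, the `view.file.close()` call is skipped and the leak returns. Either prefer the context manager (which closes the file even on exceptions) or wrap the cleanup in `try`/`finally`. A minimal sketch, reusing the names from the question:

```python
import binaryninja

for b in binaries:
    view = binaryninja.load(source=b)
    try:
        results = process_binary(view)
        write_to_disk(results)
    finally:
        # Release the native file handle even if processing fails, so the core
        # does not keep the view's memory alive across iterations.
        view.file.close()
```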