Hi Oren,
I am training on my 3 GB corpus. I am running it on clusters that have a 27 GB memory limit, and I encounter:
cupy.cuda.memory.OutOfMemoryError.
Is there some way to limit the memory the code uses? Or to split the corpus file and do the training in steps? Or to change some arguments so it uses less memory?
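For example, is something along these lines possible? This is just a sketch on my side, assuming the training code allocates through CuPy's default memory pool; the byte limit below is an arbitrary placeholder:

```python
# Sketch only: cap CuPy's GPU memory pool before training starts.
# Assumes the training code allocates through the default memory pool.
import cupy as cp

pool = cp.get_default_memory_pool()
pool.set_limit(size=24 * 1024**3)  # example cap: 24 GiB (placeholder value)

# Allocations beyond the cap raise cupy.cuda.memory.OutOfMemoryError early,
# instead of exhausting the device. (I believe the CUPY_GPU_MEMORY_LIMIT
# environment variable can set the same cap, if that is preferable.)
```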
Thanks.