The way TF handles checkpointing with:
tf.estimator.train_and_evaluate(nn, train_spec, eval_spec)
seems to introduce a lot of IO lag: it saves the parameters to disk after every epoch, runs validation, then loads the model back from disk and repeats.
Is there an easier way to keep the model in memory (as other frameworks such as PyTorch do) and save to disk only once at the end?
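For context, a minimal sketch of how the checkpoint frequency can at least be tuned via tf.estimator.RunConfig (my_model_fn is a hypothetical placeholder); this only reduces, rather than eliminates, the disk round-trips:

import tensorflow as tf

# Sketch: write checkpoints less often so save/restore dominates less.
# save_checkpoints_steps and keep_checkpoint_max are standard TF 1.x
# RunConfig options; my_model_fn is a hypothetical placeholder.
run_config = tf.estimator.RunConfig(
    save_checkpoints_steps=10000,  # checkpoint every 10k steps
    keep_checkpoint_max=1)         # retain only the latest checkpoint

nn = tf.estimator.Estimator(model_fn=my_model_fn, config=run_config)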
For example, training directly on NumPy arrays:
nn.train(tf.estimator.inputs.numpy_input_fn(
    fake_X,
    fake_y,
    shuffle=False,
    num_epochs=EPOCHS,
    batch_size=BATCHSIZE))
This takes 14min30s with TF and 16min52s with Keras. However, the train_and_evaluate loop takes 21min49s with TF and 20min16s with Keras.
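For reference, a minimal sketch of the evaluated loop; valid_X and valid_y are hypothetical validation arrays, fake_X, fake_y, EPOCHS, and BATCHSIZE carry over from the snippet above, and throttle_secs (a standard EvalSpec parameter) bounds how often evaluation, and hence the checkpoint reload, can happen:

train_spec = tf.estimator.TrainSpec(
    input_fn=tf.estimator.inputs.numpy_input_fn(
        fake_X,
        fake_y,
        shuffle=True,
        num_epochs=EPOCHS,
        batch_size=BATCHSIZE))

eval_spec = tf.estimator.EvalSpec(
    input_fn=tf.estimator.inputs.numpy_input_fn(
        valid_X,        # hypothetical validation features
        valid_y,        # hypothetical validation labels
        shuffle=False,
        batch_size=BATCHSIZE),
    throttle_secs=600)  # evaluate (and reload from checkpoint) at most every 10 min

tf.estimator.train_and_evaluate(nn, train_spec, eval_spec)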