Coding-step-size-optimizers

Hand-coded step-size optimizers (momentum and Adam) for deep-learning neural networks, implemented by modifying the update logic of plain stochastic gradient descent.

Project Description

The project contains a ComputationalGraphPrimer file with all the classes required to train a model with stochastic gradient descent (SGD). The two other files, for a single-neuron classifier and a multi-neuron classifier, override the relevant functions to implement SGD with momentum and SGD with Adam.
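The repository files are not reproduced here, but the two update rules the project implements can be sketched as standalone functions. This is a minimal illustration, not the code from the ComputationalGraphPrimer classes; all names (`sgd_momentum_step`, `adam_step`) and hyperparameter defaults are assumptions for the sketch.

```python
import numpy as np

def sgd_momentum_step(w, grad, v, lr=0.1, mu=0.9):
    """One SGD-with-momentum update: v <- mu*v - lr*grad; w <- w + v.
    (Illustrative sketch; not the repository's actual method.)"""
    v = mu * v - lr * grad
    return w + v, v

def adam_step(w, grad, m, s, t, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update with bias-corrected first/second moment estimates.
    t is the 1-based step count. (Illustrative sketch.)"""
    m = b1 * m + (1 - b1) * grad          # running mean of gradients
    s = b2 * s + (1 - b2) * grad ** 2     # running mean of squared gradients
    m_hat = m / (1 - b1 ** t)             # bias correction
    s_hat = s / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(s_hat) + eps), m, s

# Toy check: minimize f(w) = w**2 (gradient 2*w) starting from w = 5.0
w_mom, v = 5.0, 0.0
for _ in range(200):
    w_mom, v = sgd_momentum_step(w_mom, 2 * w_mom, v)

w_adam, m, s = 5.0, 0.0, 0.0
for t in range(1, 2001):
    w_adam, m, s = adam_step(w_adam, 2 * w_adam, m, s, t)
```

Both loops drive the toy parameter toward the minimum at 0; the same per-step logic is what gets substituted into the SGD training loop when these optimizers replace the plain `w -= lr * grad` update.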