Use linearly falling learning rate
author Martin Pitt <martin@piware.de>
Sun, 30 Aug 2020 08:19:10 +0000 (10:19 +0200)
committer Martin Pitt <martin@piware.de>
Sun, 30 Aug 2020 09:40:28 +0000 (11:40 +0200)
commit 59f4fd752941f39ddcd36760202a0dc742747106
tree 937e754f524a50839754ef7bea0456a84c7a6d10
parent 0b40285a04bfbf2d73f7a7154eacb4613f08b350
Use linearly falling learning rate

This makes big strides on the completely untrained network, but treads
more carefully later on. It prevents overshooting with ReLU, and even
slightly improves Sigmoid.
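The schedule described above can be sketched as follows. This is a minimal
illustration, not the code from train.py; the function and parameter names
(`linear_lr`, `lr_max`, `lr_min`) are assumptions for the example:

```python
def linear_lr(epoch, total_epochs, lr_max=0.1, lr_min=0.001):
    """Linearly interpolate the learning rate from lr_max down to lr_min.

    Early epochs (completely untrained network) get large steps; later
    epochs get progressively smaller steps to avoid overshooting.
    """
    # frac goes from 0.0 at the first epoch to 1.0 at the last epoch
    frac = epoch / max(total_epochs - 1, 1)
    return lr_max + (lr_min - lr_max) * frac


# Example: rates fall linearly across a 10-epoch run
rates = [linear_lr(e, 10) for e in range(10)]
```

The first epoch trains at `lr_max` and the last at `lr_min`, with every
intermediate epoch strictly in between.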
train.py