piware.de Git - handwriting-recognition.git - commit log
Use linearly falling learning rate
Martin Pitt [Sun, 30 Aug 2020 08:19:10 +0000 (10:19 +0200)]

This makes big strides on the completely untrained network, but treads
more carefully later on. It prevents overshooting with reLU, and even
slightly improves Sigmoid.
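
A linearly falling learning rate can be sketched as follows. This is a minimal illustration; the function name and the start/end values are made up, not taken from the repository:

```python
def linear_lr(initial_lr, final_lr, step, total_steps):
    """Interpolate linearly from initial_lr down to final_lr over
    total_steps learning steps, then stay at final_lr."""
    frac = min(step / total_steps, 1.0)
    return initial_lr + (final_lr - initial_lr) * frac

# Big steps on the untrained network, careful ones later:
lr_start = linear_lr(1.0, 0.1, 0, 100)    # 1.0
lr_end = linear_lr(1.0, 0.1, 100, 100)    # ~0.1
```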

Simplify code
Martin Pitt [Sun, 30 Aug 2020 08:03:25 +0000 (10:03 +0200)]

Eliminate unnecessary persistent variables and initializations.

Tweak network structure
Martin Pitt [Sat, 29 Aug 2020 19:59:56 +0000 (21:59 +0200)]

Increase the first layer size, otherwise the network does not "see"
enough detail in the input. Drop the second layer, as it actually makes
the recognition rate much worse.

Process many images in parallel
Martin Pitt [Sat, 29 Aug 2020 19:29:39 +0000 (21:29 +0200)]

Provide one object per NN layer and implement their functionality
separately, like in
https://www.kdnuggets.com/2019/08/numpy-neural-networks-computational-graphs.html

Each layer takes not just one image vector, but all 10,000 of them at
once, which massively speeds up the computation -- much less time is
spent in Python iterations.
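
The batched-layer idea can be sketched with numpy; the class and variable names here are hypothetical, not the actual objects from the repository:

```python
import numpy as np

class SigmoidLayer:
    """One fully connected layer that processes a whole batch at once."""
    def __init__(self, n_in, n_out, rng):
        self.W = rng.standard_normal((n_out, n_in)) * 0.1
        self.b = np.zeros((n_out, 1))

    def forward(self, X):
        # X: (n_in, n_samples), one image per column. The matrix product
        # handles all samples in a single C-level call, and the bias
        # column broadcasts across them.
        return 1.0 / (1.0 + np.exp(-(self.W @ X + self.b)))

rng = np.random.default_rng(0)
layer = SigmoidLayer(784, 30, rng)
batch = rng.random((784, 10000))     # all 10,000 test images at once
out = layer.forward(batch)           # shape (30, 10000)
```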

Add backpropagation batching
Martin Pitt [Sat, 29 Aug 2020 12:58:24 +0000 (14:58 +0200)]

This will eventually speed up learning, once it gets run with random
subsets of the training data.
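
Picking a random subset of the training data for each step might look like this (a sketch with illustrative names, assuming the 60,000-image MNIST training set):

```python
import numpy as np

rng = np.random.default_rng(0)

def minibatch_indices(n_samples, batch_size, rng):
    """Pick a random subset of the training samples for one learning step."""
    return rng.choice(n_samples, size=batch_size, replace=False)

idx = minibatch_indices(60000, 100, rng)
# X_train[:, idx] would then select the corresponding image columns
```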

Add backpropagation and first round of learning
Martin Pitt [Sat, 29 Aug 2020 11:31:25 +0000 (13:31 +0200)]

Move forward feeding to sigmoid
Martin Pitt [Sat, 29 Aug 2020 13:01:24 +0000 (15:01 +0200)]

Let's compare it to reLU later on.
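
The two transfer functions under comparison, as a minimal sketch:

```python
import numpy as np

def sigmoid(z):
    """Squashes any input smoothly into the open interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    """Passes positive inputs through unchanged, clamps negatives to zero."""
    return np.maximum(0.0, z)
```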

Initial Neural network with forward feeding
Martin Pitt [Sat, 29 Aug 2020 10:48:59 +0000 (12:48 +0200)]

Two hidden layers with parametrizable size. Two possible transfer
functions, defaulting to reLU for now.

Initialize weights and biases randomly. This gives totally random
classifications of course, but at least makes sure that the data
structures and computations work.

Also add a function to recognize the test images and count the correct
ones. Without training, 10% of the samples are expected to be right by
pure chance.
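
Counting correct recognitions, and the roughly 10% chance baseline, can be sketched like this. The helper name is hypothetical; it assumes an output layer with one row per digit, one column per image:

```python
import numpy as np

def count_correct(outputs, labels):
    """outputs: (10, n_samples) output-layer activations, one column per
    image; labels: (n_samples,) true digits. The predicted digit is the
    row with the highest activation."""
    return int(np.sum(np.argmax(outputs, axis=0) == labels))

# An untrained network gives essentially random activations, so about
# 10% of 10,000 test samples should match by pure chance:
rng = np.random.default_rng(1)
outputs = rng.random((10, 10000))
labels = rng.integers(0, 10, size=10000)
acc = count_correct(outputs, labels) / 10000   # close to 0.1
```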

Rearrange image vector
Martin Pitt [Sat, 29 Aug 2020 05:45:10 +0000 (07:45 +0200)]

Put each image into a column instead of a row, which works much better
with the standard formulation of backpropagation algorithms.
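
The column layout can be sketched as follows; the array names are illustrative:

```python
import numpy as np

# 100 flattened 28x28 images, one per row, as they come out of a file reader
images_rows = np.arange(100 * 784, dtype=float).reshape(100, 784)

# One image per *column* instead: a weight matrix W of shape (n_out, 784)
# can then process every sample in a single product W @ X, matching the
# standard formulation of the backpropagation formulas.
X = images_rows.T                    # shape (784, 100)
W = np.zeros((10, 784))
activations = W @ X                  # shape (10, 100): one column per image
```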

Read MNIST db into numpy arrays, display
Martin Pitt [Fri, 28 Aug 2020 06:18:29 +0000 (08:18 +0200)]
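
Parsing the IDX format used by the MNIST files can be sketched like this. The 16-byte big-endian header with magic number 2051 is part of the published format; the function name is made up:

```python
import gzip
import struct
import numpy as np

def read_idx_images(path):
    """Parse an MNIST IDX image file (optionally gzipped) into a
    (n_images, n_pixels) numpy array of uint8 pixel values."""
    opener = gzip.open if path.endswith('.gz') else open
    with opener(path, 'rb') as f:
        # header: magic number, image count, rows, columns (big-endian uint32)
        magic, n, rows, cols = struct.unpack('>IIII', f.read(16))
        assert magic == 2051, 'not an IDX image file'
        pixels = np.frombuffer(f.read(), dtype=np.uint8)
    return pixels.reshape(n, rows * cols)
```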

Add script to download MNIST digit database
Martin Pitt [Fri, 28 Aug 2020 04:46:51 +0000 (06:46 +0200)]

README.md: First steps and resources
Martin Pitt [Fri, 28 Aug 2020 05:37:04 +0000 (07:37 +0200)]