piware.de Git - handwriting-recognition.git/log
handwriting-recognition.git
3 years ago: Add ReLU layer implementation [master]
Martin Pitt [Sun, 30 Aug 2020 08:20:14 +0000 (10:20 +0200)]
Add ReLU layer implementation

This requires normalizing the input data to [0, 1], otherwise the
activations get wildly out of range. But normalizing the input range
makes Sigmoid worse, so don't do this by default.

Even with normalization, ReLU still performs slightly worse than
Sigmoid, though.
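
A minimal numpy sketch of such a layer; the class and method names are
illustrative, not the repository's actual API:

    import numpy as np

    class ReLU:
        def forward(self, z):
            self.z = z                      # remember input for backward pass
            return np.maximum(0.0, z)

        def backward(self, grad_out):
            # the gradient passes through only where the input was positive
            return grad_out * (self.z > 0)

    # normalization to [0, 1]: MNIST pixels are bytes in [0, 255]
    # X = X.astype(np.float64) / 255.0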

3 years ago: Use linearly falling learning rate
Martin Pitt [Sun, 30 Aug 2020 08:19:10 +0000 (10:19 +0200)]
Use linearly falling learning rate

This makes big strides on the completely untrained network, but treads
more carefully later on. It prevents overshooting with ReLU, and even
slightly improves Sigmoid.
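
A linear schedule can be as simple as interpolating between a start and
an end rate; the constants below are illustrative, not the values used
in this commit:

    def learning_rate(round_no, total_rounds, eta_max=1.0, eta_min=0.01):
        # falls linearly from eta_max in round 0 to eta_min in the last round
        frac = round_no / max(total_rounds - 1, 1)
        return eta_max + (eta_min - eta_max) * frac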

3 years ago: Simplify code
Martin Pitt [Sun, 30 Aug 2020 08:03:25 +0000 (10:03 +0200)]
Simplify code

Eliminate unnecessary persistent variables and initializations.

3 years ago: Tweak network structure
Martin Pitt [Sat, 29 Aug 2020 19:59:56 +0000 (21:59 +0200)]
Tweak network structure

Increase the first layer's size, otherwise the network does not "see"
enough detail in the input. Drop the second layer, as it actually makes
the recognition rate much worse.

3 years ago: Process many images in parallel
Martin Pitt [Sat, 29 Aug 2020 19:29:39 +0000 (21:29 +0200)]
Process many images in parallel

Provide one object per NN layer and implement their functionality
separately, like in
https://www.kdnuggets.com/2019/08/numpy-neural-networks-computational-graphs.html

Each layer does not take only one image vector, but all 10,000 of them
at once, which massively speeds up the computation -- much less time is
spent in Python iteration.
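
A sketch of what such a layer object might look like; the layer size
and names are illustrative, not taken from the repository:

    import numpy as np

    class SigmoidLayer:
        def __init__(self, n_in, n_out):
            self.W = np.random.randn(n_out, n_in)
            self.b = np.random.randn(n_out, 1)

        def forward(self, X):
            # X: (n_in, n_images); one matrix product covers the whole
            # batch, and b broadcasts across all image columns
            Z = self.W @ X + self.b
            return 1.0 / (1.0 + np.exp(-Z))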

3 years ago: Add backpropagation batching
Martin Pitt [Sat, 29 Aug 2020 12:58:24 +0000 (14:58 +0200)]
Add backpropagation batching

This will eventually speed up learning, once it gets run with random
subsets of the training data.
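
Selecting such random subsets could look like this hedged sketch; the
batch size and names are made up:

    import numpy as np

    rng = np.random.default_rng()

    def batches(X, Y, batch_size=100):
        # X and Y hold one image/label vector per column
        idx = rng.permutation(X.shape[1])
        for start in range(0, len(idx), batch_size):
            sel = idx[start:start + batch_size]
            yield X[:, sel], Y[:, sel]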

3 years ago: Add backpropagation and first round of learning
Martin Pitt [Sat, 29 Aug 2020 11:31:25 +0000 (13:31 +0200)]
Add backpropagation and first round of learning
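
For reference, one gradient-descent step for a single sigmoid layer
with a squared-error cost looks roughly like this; a generic sketch,
not this repository's code:

    import numpy as np

    def backprop_step(W, b, X, A, Y, eta):
        # A = sigmoid(W @ X + b) is the layer output, Y the expected output
        m = X.shape[1]                      # number of images in the batch
        delta = (A - Y) * A * (1.0 - A)     # dC/dZ for sigmoid + squared error
        W -= eta * (delta @ X.T) / m        # gradient averaged over the batch
        b -= eta * delta.mean(axis=1, keepdims=True)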

3 years ago: Move forward feeding to sigmoid
Martin Pitt [Sat, 29 Aug 2020 13:01:24 +0000 (15:01 +0200)]
Move forward feeding to sigmoid

Let's compare it to ReLU later on.
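
The sigmoid transfer function and its derivative, for reference:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def sigmoid_prime(z):                   # used by backpropagation
        s = sigmoid(z)
        return s * (1.0 - s)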

3 years ago: Initial Neural network with forward feeding
Martin Pitt [Sat, 29 Aug 2020 10:48:59 +0000 (12:48 +0200)]
Initial Neural network with forward feeding

Two hidden layers with parametrizable size. Two possible transfer
functions, defaulting to ReLU for now.

Initialize weights and biases randomly. This gives totally random
classifications of course, but at least makes sure that the data
structures and computations work.

Also already add a function to recognize the test images and count the
correct ones. Without training, 10% of the samples are expected to be
right by pure chance.
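
Condensed into a sketch; the layer sizes and names are made up for
illustration, not the repository's actual values:

    import numpy as np

    rng = np.random.default_rng()

    # two hidden layers plus the 10-neuron output (784 = 28x28 pixels)
    W1, b1 = rng.standard_normal((100, 784)), rng.standard_normal((100, 1))
    W2, b2 = rng.standard_normal((30, 100)), rng.standard_normal((30, 1))
    W3, b3 = rng.standard_normal((10, 30)), rng.standard_normal((10, 1))

    def relu(z):
        return np.maximum(0.0, z)

    def classify(X):                        # X: one image per column
        A1 = relu(W1 @ X + b1)
        A2 = relu(W2 @ A1 + b2)
        A3 = relu(W3 @ A2 + b3)
        return A3.argmax(axis=0)            # strongest output neuron per image

    def count_correct(X, labels):
        return int((classify(X) == labels).sum())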

3 years ago: Rearrange image vector
Martin Pitt [Sat, 29 Aug 2020 05:45:10 +0000 (07:45 +0200)]
Rearrange image vector

Put each image into a column instead of a row, which works much better
with the standard formulation of backpropagation algorithms.
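
In numpy terms, an illustrative helper (not necessarily the
repository's code):

    import numpy as np

    def to_columns(images):
        # images: (N, 28, 28) as read from the MNIST file; the result has
        # shape (784, N), so X[:, i] is image i and a layer can compute
        # W @ X + b over all columns at once
        N = images.shape[0]
        return images.reshape(N, 784).T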

3 years ago: Read MNIST db into numpy arrays, display
Martin Pitt [Fri, 28 Aug 2020 06:18:29 +0000 (08:18 +0200)]
Read MNIST db into numpy arrays, display
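
Reading the IDX format that MNIST uses might look like this sketch: a
big-endian header with two zero bytes, a type code, and the number of
dimensions, then one 4-byte size per dimension, followed by the raw
pixel bytes. The file name below is just an example:

    import gzip
    import struct
    import numpy as np

    def read_idx(path):
        with gzip.open(path, 'rb') as f:
            _zeros, _dtype, ndim = struct.unpack('>HBB', f.read(4))
            shape = struct.unpack('>' + 'I' * ndim, f.read(4 * ndim))
            return np.frombuffer(f.read(), dtype=np.uint8).reshape(shape)

    # train_images = read_idx('train-images-idx3-ubyte.gz')  # (60000, 28, 28)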

3 years ago: Add script to download MNIST digit database
Martin Pitt [Fri, 28 Aug 2020 04:46:51 +0000 (06:46 +0200)]
Add script to download MNIST digit database
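
Such a script might boil down to the following; the URL and file names
are the canonical MNIST ones, but treat this as a sketch:

    import urllib.request

    BASE = 'http://yann.lecun.com/exdb/mnist/'
    FILES = ('train-images-idx3-ubyte.gz', 'train-labels-idx1-ubyte.gz',
             't10k-images-idx3-ubyte.gz', 't10k-labels-idx1-ubyte.gz')

    for name in FILES:
        urllib.request.urlretrieve(BASE + name, name)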

3 years ago: README.md: First steps and resources
Martin Pitt [Fri, 28 Aug 2020 05:37:04 +0000 (07:37 +0200)]
README.md: First steps and resources