
Project Concept

Implement a convolutional neural network (CNN) that can classify an input pattern (a 3×3 window in the MFM). It will use the layers of a very basic CNN: a convolution layer and a fully-connected layer. The network will have to start off static in the MFM, as passing data would be very difficult otherwise. This will require five elements, which I describe in detail further down this page.
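As a rough illustration of the planned data flow (not the actual MFM/ULAM elements), here is a minimal Python sketch of the forward pass, assuming 12 convolution filters feeding a 2-output fully-connected layer; all weights here are random placeholders:

```python
import random
random.seed(0)

# 3x3 binary input window; here, a horizontal 2-length line
image = [[0, 0, 0],
         [1, 1, 0],
         [0, 0, 0]]

def conv3x3(img, kernel, bias):
    """'Valid' (no padding) 3x3 convolution of a 3x3 image -> one activation."""
    return sum(img[r][c] * kernel[r][c]
               for r in range(3) for c in range(3)) + bias

# Convolution layer: 12 filters, each collapsing the window to one value
filters = [([[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)],
            random.uniform(-1, 1))
           for _ in range(12)]
conv_out = [conv3x3(image, k, b) for k, b in filters]

# Fully-connected layer: 12 inputs -> 2 class scores
fc = [([random.uniform(-1, 1) for _ in range(12)], random.uniform(-1, 1))
      for _ in range(2)]
scores = [sum(w * x for w, x in zip(ws, conv_out)) + b for ws, b in fc]

predicted = scores.index(max(scores))  # index of the winning class
```

With no padding, a 3×3 filter on a 3×3 image produces exactly one activation per filter, which is why the fully-connected stage takes just 12 inputs.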

Elements

Neuron:

Will contain the weights from the pre-trained network that it will be using to classify the image

Will handle all of the computations for the convolution

Will update the neighboring pixels based on the output from the computation

Init_Layer:

This is the middle Neuron, which will initialize, reset, and repair all of the other Neurons in the specific filter

This will also contain its own weight and bias, which will be summed into the total for the filter

FC_Neuron:

Neurons for the fully-connected layer, which will use fixed-point multiplication to calculate their values.

There will be four layers of 12, which will each represent a pattern to be classified
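Since MFM elements work with integer state rather than floats, the fixed-point multiplication can be sketched as follows; the 8-bit fractional precision here is my assumption, not necessarily what the element uses:

```python
FRAC_BITS = 8            # assumed fractional precision
SCALE = 1 << FRAC_BITS   # 256

def to_fixed(x):
    """Encode a real value as a scaled integer."""
    return int(round(x * SCALE))

def fixed_mul(a, b):
    """Multiply two fixed-point values and rescale back to FRAC_BITS.

    Note: >> truncates toward negative infinity for negative products.
    """
    return (a * b) >> FRAC_BITS

w = to_fixed(0.75)       # a weight
x = to_fixed(2.0)        # an input activation
y = fixed_mul(w, x)
print(y / SCALE)         # 1.5
```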

FC_Init_Layer:

Will initialize, reset, and repair the FC_Neurons in each layer

Label:

Will be used to mark the classification once the network is complete, meaning the FC_Layer_Twelve will create this once the classification values are complete

Weekly Logs

Week 7 Update:

Now understand the concept of CNNs better through experimenting with tiny_dnn

Found “final” design for blocks of CNN

Will now work on communication between the blocks in the CNN

Will also work on fixing a size for the CNN based around a smallish image

Week 8 Update:

Updated project page to more clearly state what my project is and my goals

Messed around with trying to get my own images as inputs for tiny_dnn

Researched how small my input images could be for a CNN

Week 9 Update:

Created Demo video for progress on project

New Title: Object Classification in the Movable Feast Machine

Scaled the project back a lot, to hand-made 3×3 images without padding. They represent objects crudely, as the pixel values can only be 0 or 1.

These 3×3 images can contain a dot (1) or a 2-length line (0).

I get an 80% successful classification rate in tiny_dnn with the new CNN structure and these input images

Researched CNN structures and tested them in tiny_dnn with hand-made images of 9×9, 7×7, 5×5, and 3×3, with and without padding

The new CNN structure is a convolution layer with 12 filters, feeding into a fully-connected layer that takes 12 inputs and outputs 2 results.

Implemented a working convolution layer in the MFM, which takes pixels from an image element and calculates the output of this layer

Created a shell for the fully-connected layer in the MFM; it contains weights and biases, but nothing else yet.

Scrapped the communication concept, as I can place all of the layers in a column arrangement, so there is no need for it

Week 10 Update:

Implemented the fully connected layer

Compared the weights and results from each layer against the tiny_dnn output; they matched well

The results weren't exact, but I found a bug where the values changed slightly because certain Neurons exceeded their stages and skipped certain steps

Found some major problems in the weights I was using after testing every single possible input pattern

Will find better weights in week 11

Forgot to put my week 10 update on week 10….

Week 11 Update:

Found better patterns and shapes to use together; am using a 2×2 box and a horizontal 2-length line

Will look into adding more objects that can still be successfully classified

Implemented these weights and biases into the project, and it classifies 8/10 possible patterns (6 from the horizontal 2 length line, and 4 from the 2×2 box)

It does not pull the image (pixels) from an element anymore; the image can now be hand-drawn by the user

Tested hitting the network with “radiation”, and it failed horribly every time

Working towards getting a reset to work

The Neurons in the convolution layer already had their weights stored separately from their output values, while the fully-connected layer did not

I have successfully separated the weights and the output values for the fully-connected layer, and it works perfectly

Added a reset element; when the Neurons see it, they will reset the network

Created a presentation for this project
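The 10 possible input patterns counted above (6 placements of the horizontal 2-length line plus 4 placements of the 2×2 box) can be enumerated directly; this is just an illustrative sketch, not project code:

```python
def horizontal_lines():
    """All placements of a horizontal 2-length line in a 3x3 binary grid."""
    grids = []
    for r in range(3):
        for c in range(2):              # line covers columns c and c+1
            g = [[0] * 3 for _ in range(3)]
            g[r][c] = g[r][c + 1] = 1
            grids.append(g)
    return grids                        # 3 rows x 2 offsets = 6 patterns

def boxes():
    """All placements of a 2x2 box in a 3x3 binary grid."""
    grids = []
    for r in range(2):
        for c in range(2):
            g = [[0] * 3 for _ in range(3)]
            for dr in range(2):
                for dc in range(2):
                    g[r + dr][c + dc] = 1
            grids.append(g)
    return grids                        # 2 x 2 = 4 patterns

print(len(horizontal_lines()) + len(boxes()))  # 10
```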

Week 12 Update:

Fixed a bug in the fully_connected layer where the simulation was going too fast for it to correctly pass the weights

Created scripts to gather data from modifying training parameters in tiny_dnn; I will then compare the results to what I get from my pre-trained model

Added different objects, so it can classify up to 5 now, though with a much worse percentage…

Wrote the abstract for this project

Week 13 Update:

Added more rotations and shifts of each shape, and this increased the classification rates for every number of classification patterns

Can now only classify 4, but with much better percentages, up by about 20-30% from the previous version

Also found a problem with the version of tiny_dnn I was using, but was able to fix it. It had to do with the rescaling of the outputs

Created more plots with the improved numbers and placed them in my paper

Added a lot more to my paper including introduction, methods, some results, and some sort of conclusion/discussion

Corrected my abstract

Made my CNN in the MFM more robust: when a big chunk is removed it will repair itself, and after a reset it can be re-run

Week 14 Update:

Found a few bugs in the repairing and was able to improve it

Redesigned the fully-connected layer so that it can classify four different patterns at any rotation

With the box, two-length line, L shape, and three-length line, I get a global max of 77.5% classification rate

Went back to three patterns, as I was running into problems, and I get a consistent 85% classification rate

Improved the paper; still need more plots for the results and an improved discussion