*** How can FP in general, and Haskell in particular, help in modeling memory with neuron-like structures? ***

The particular model discussed here (later in this text I call it Memory Tree) is a type of neural network (NN) where the network consists of a collection of data-processing nodes arranged in a tree-shaped hierarchy.
Memory Tree (MT) is a hierarchical classifier that infers recurring data patterns observed in its input during some period of time T. A time period is a set of discrete times (or moments) T = [t1, t2, ... tN].

Each node reads inputs from all its children and builds from them a single input vector as a concatenation of the children's outputs. The node then classifies this newly built input vector, assigning it to one of the category groups that the node creates. Finally, the node outputs the number of the group that the current input belongs to. This output goes to the node's parent.
All nodes at all levels perform these steps in turn, with inputs flowing from the bottom-level nodes up to a single top node through all nodes at all intermediate levels.
Next I illustrate the main ideas of MT with a simple example.
*** Simple MT Example

The primitive MT classifier used in this example will (I hope :) discern, for example, these images:

1)
########
########
########
########

2)
########
#......#
#......#
########

3)
###..###
#.####.#
#.####.#
###..###

('#' - black dot, '.' - white dot)

In my case, at each moment MT uses bottom-level nodes to read input from a data file.
Input data is a collection of strings. Each string is a binary vector representing data read by some sensor at time tj.

For example:

v1(t1) : 11111111111111111111111111111111
v2(t2) : 11111111100000011000000111111111
v3(t3) : 11100111101111011011110111100111
...
...
vN(tN) : ...

In this example the input is a 32-bit vector read at consecutive times t1 < t2 < t3 ... tN.
This vector represents a rectangular camera view: an 8 x 4 matrix of black (1) and white (0) dots. The images shown above are encoded in 32-bit vectors, one image per vector, where a black dot '#' becomes 1 and a white dot '.' becomes 0.
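The encoding step can be sketched in Haskell (a minimal sketch; the function name and the [Int] bit-vector representation are my own, not part of any library):

```haskell
-- Encode an ASCII image ('#' = 1, '.' = 0) as a flat bit vector.
-- A "bit vector" is represented simply as a list of 0s and 1s.
encodeImage :: [String] -> [Int]
encodeImage rows = [ if c == '#' then 1 else 0 | row <- rows, c <- row ]
```

Running this over image 2) above yields exactly the 32-bit vector v2(t2).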
In this case MT may have the following topology:

-- One Top-Level Node: T
-- Two Middle-Level Nodes: M1, M2
-- Four Bottom-Level Nodes: B1, B2, B3, B4
-- Input Vectors: v1(t1), v2(t2), v3(t3), ... vN(tN)
To show how the nodes in this tree are connected I will use brackets. In my 'bracket notation' a parent node's name is followed by its child node names in brackets:

(T (M1 (B1, B2), M2 (B3, B4)))

This describes a topology where:
-- T node has two children: M1 and M2
-- M1 node has two children: B1 and B2
-- M2 node has two children: B3 and B4

(Sorry for the poor illustrations; unfortunately I can't use graphics in this post, including ASCII art :)
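The bracket notation maps directly onto a rose tree. A minimal Haskell sketch (the type and names are my own illustration):

```haskell
-- A rose tree of named nodes; nodes with no children are bottom-level
-- (sensor) nodes.
data MTree = MNode String [MTree]
  deriving (Show, Eq)

-- The example topology (T (M1 (B1, B2), M2 (B3, B4))):
example :: MTree
example =
  MNode "T"
    [ MNode "M1" [MNode "B1" [], MNode "B2" []]
    , MNode "M2" [MNode "B3" [], MNode "B4" []]
    ]

-- Leaves of the tree, left to right (the bottom-level nodes).
leaves :: MTree -> [String]
leaves (MNode name []) = [name]
leaves (MNode _ cs)    = concatMap leaves cs
```

With this representation, "topology" is just a value, which also points toward question 3) below: a declarative DSL is essentially a convenient way of writing such values down.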
For this topology each input vector, for example v2(t2), may be split into four equal 8-bit parts. The chunks then represent four regions of the camera image.

Each 8-bit chunk is sent as input to the corresponding Bottom-Level node: B1, B2, B3 or B4.

v2(t2)_b31-b24 : 11111111 --> B1
v2(t2)_b23-b16 : 10000001 --> B2
v2(t2)_b15-b8  : 10000001 --> B3
v2(t2)_b7-b0   : 11111111 --> B4

To work with these vectors each Bottom node will have an input vector dimension BK = 8, so it can classify 2 ** 8 = 256 distinct input vectors.
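The splitting step is a one-liner worth of recursion (a sketch; `chunksOf` is hand-rolled here, though the `split` package provides an equivalent `Data.List.Split.chunksOf`):

```haskell
-- Split a list into consecutive chunks of size n; the last chunk may be
-- shorter if n does not divide the length evenly.
chunksOf :: Int -> [a] -> [[a]]
chunksOf _ [] = []
chunksOf n xs = take n xs : chunksOf n (drop n xs)
```

Applied with n = 8 to a 32-bit vector, this yields the four chunks destined for B1..B4.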
Let's say a Bottom node can classify these 256 input vectors into 2 ** 3 = 8 output groups, so it will have an output vector dimension BL = 3. In turn, each of M1 and M2 will have an input dimension equal to the sum of its children's (B nodes') output dimensions:
M1K = B1L + B2L = 3 + 3 = 6
M2K = B3L + B4L = 3 + 3 = 6

Our M nodes can classify 2 ** 6 = 64 vectors, and we assume that they can categorize these 64 vectors into 2 ** 2 = 4 groups, so an M node output vector is 2 bits long:
ML = 2

Connecting the two M nodes to a single top node T requires a T input dimension equal to the sum of the two M node output dimensions:

TK = M1L + M2L = 2 + 2 = 4

The T node can classify 2 ** 4 = 16 vectors, and we assume the T node can group these vectors into 2 ** 1 = 2 groups, so the T node output vector is 1 bit long:
TL = 1

To summarize:

*** Node Attributes

Each node in this tree has the following attributes:
1) K-bit input vector.
2) L-bit output vector, where K >= L.
3) Node Category Memory (CM) - stores unique category vectors. Each vector is stored as a tuple (V, C), where V is a vector value and C is a category number. Both V and C values, as well as the (V, C) pairs, are unique within the node. A node can store a limited number of (V, C) pairs. This amount (CMSize) is the number of categories the node can differentiate, i.e. 2 ** L for an L-bit output vector.
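These attributes translate naturally into a Haskell record (field and type names below are my own sketch, not a fixed design):

```haskell
-- A bit vector, again as a plain list of 0s and 1s.
type BitVec = [Int]

data NodeState = NodeState
  { inDim  :: Int              -- K, input vector dimension in bits
  , outDim :: Int              -- L, output vector dimension in bits
  , cm     :: [(BitVec, Int)]  -- Category Memory: unique (V, C) pairs
  } deriving Show

-- Maximum number of categories this node can hold: 2 ** L.
cmCapacity :: NodeState -> Int
cmCapacity n = 2 ^ outDim n
```

For a Bottom node from the example, `NodeState { inDim = 8, outDim = 3, cm = [] }` starts with an empty memory and room for 8 categories.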
*** Node Data Processing Algorithm

Every node runs the following endless loop:
1) Read inputs from children.
2) Build input vector X.
3) Calculate the distance between X and every vector in CM and determine whether X belongs to any existing node category.
If X belongs to an existing category
    Then do step 4)
    Else
        If CM is not full, create a new node category pair (X, C_new)
        Else return
4) Output the vector's group number to the higher-level node in the tree.
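Step 3) and the branch above can be sketched as a pure function over the category memory. This is only one possible reading: Hamming distance, and a threshold of 0 (exact match required to reuse a category), are my own simplifying assumptions:

```haskell
type BitVec = [Int]
type CM = [(BitVec, Int)]

-- Hamming distance between two equal-length bit vectors.
hamming :: BitVec -> BitVec -> Int
hamming xs ys = length (filter id (zipWith (/=) xs ys))

-- Classify X against the category memory: return the category of the
-- first stored vector within the distance threshold, or add X as a new
-- category if there is room. Nothing means the input was dropped
-- because CM was full.
classify :: Int -> Int -> BitVec -> CM -> (Maybe Int, CM)
classify threshold capacity x cmem =
  case [ (d, c) | (v, c) <- cmem, let d = hamming x v, d <= threshold ] of
    ((_, c) : _) -> (Just c, cmem)            -- existing category
    []
      | length cmem < capacity ->
          let c = length cmem                 -- next free category number
          in (Just c, cmem ++ [(x, c)])      -- learn a new category
      | otherwise -> (Nothing, cmem)          -- memory full: drop input
```

Returning the (possibly updated) CM alongside the answer is exactly the `(value, newState)` shape that question 1) below is about.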
*** Main Questions

1) How to model node vector processing and node state? A node receives input, produces output, and can change its state by adding a new category to its CM. Will the State monad help to 'thread' node state between inputs? I don't see how.
The State monad encapsulates a state transition function:

newtype State s a = State (s -> (a, s))

that takes some state value and returns a pair (value, newState).
In my case the node category memory (CM) could be such a changing node state, but what then would the node input vector be in the State monad?
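One possible shape, offered as a sketch rather than the answer: the input vector becomes an ordinary function argument, and only the CM lives in the state (this uses `Control.Monad.State` from the mtl package; creating a new category on any unseen vector is a simplification):

```haskell
import Control.Monad.State

type BitVec = [Int]
type CM = [(BitVec, Int)]

-- The input vector x is a plain argument; only CM is threaded as state.
step :: Int -> BitVec -> State CM (Maybe Int)
step capacity x = do
  cmem <- get
  case lookup x cmem of
    Just c  -> pure (Just c)                  -- existing category
    Nothing
      | length cmem < capacity -> do
          let c = length cmem
          put (cmem ++ [(x, c)])              -- learn a new category
          pure (Just c)
      | otherwise -> pure Nothing             -- memory full

-- Feeding a stream of inputs threads the CM between them automatically:
runStream :: Int -> [BitVec] -> ([Maybe Int], CM)
runStream capacity xs = runState (mapM (step capacity) xs) []
```

So the "input" never enters the state type at all; `mapM` does the threading of CM across moments t1, t2, ... that would otherwise have to be done by hand.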
2) How to traverse the classifier tree (MT) so that inputs from bottom-level nodes flow up to the top node through all nodes at all intermediate levels? In contrast to a 'normal' tree, where all variations of tree traversal start at the root node, the classifier tree must be traversed starting from the bottom-level nodes.
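For what it's worth, a bottom-up pass falls out of plain structural recursion: children are evaluated before their parent. A sketch (names are mine; here each internal node just combines its children's results with a pluggable function):

```haskell
-- A tree whose leaves carry sensor input and whose internal nodes
-- combine their children's outputs.
data Tree a = Leaf a | Branch [Tree a]

-- 'combine' plays the role of a node's classification step; making it an
-- argument means any per-node function can be plugged in.
evalUp :: ([b] -> b) -> (a -> b) -> Tree a -> b
evalUp _       readLeaf (Leaf x)    = readLeaf x
evalUp combine readLeaf (Branch cs) =
  combine (map (evalUp combine readLeaf) cs)  -- children first, then parent
```

For example, combining by concatenation reassembles the full input vector at the top, while substituting a classify-and-output function recovers the MT behaviour.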
3) How to build tools that will allow constructing classifier trees with different topologies? In this case one topology differs from another by a) the number of levels and b) the number of children at each level.
Ideally I would like a simple DSL that allows specifying the tree topology declaratively, easy enough for a non-programmer to understand.
4) Most important: in what direction should I look, as a Haskell newbie, to find answers to these and similar questions? Should I build my own monad to traverse this tree? What other standard monads may help? What combinator libraries should I learn? Will I need Arrows (of which, at the moment, I know only the name and nothing more)?
Thanks for reading all this, and thanks for any help!

--
Dmitri O. Kondratiev
dokondr@gmail.com
http://www.geocities.com/dkondr