Mapping From Tensors to Network Representations
In order to simulate memory processes within the tensor model of memory, it is necessary to convert the tensor representation of memories into network representations. The rank of the tensor representation determines the structure of the network representation, including: the number of layers in the network; the number of units in each layer; the type of units; and the way the units are connected together.
Rank 1 Tensors
A tensor representation of rank 1 maps to a 2 layer network (one input and one output layer) as depicted in Figure 1. The number of units in the input layer corresponds to the number of dimensions in the original vector, while the output layer contains only 1 unit. Each input unit is connected to each output unit, and all the units are linear, which means that they simply add together the activations they receive.
[Figure 1]
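The rank 1 mapping can be sketched in a few lines of NumPy. This is an illustrative example, not code from the original model: the memory vector's entries serve directly as the input-to-output connection weights, and the single linear output unit sums the weighted input activations.

```python
import numpy as np

# Hypothetical rank-1 tensor (vector) memory: its entries are the
# weights on the connections from the input units to the output unit.
memory = np.array([0.5, -0.2, 0.8])

def rank1_network(cue):
    # One linear output unit: adds together the weighted activations
    # it receives from each input unit.
    return float(np.dot(cue, memory))
```

Presenting a cue of all ones simply sums the memory vector's entries, which makes the linearity of the output unit easy to verify.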
Rank 2 Tensors
A tensor representation of rank 2 maps to a 2 layer network (one input and one output layer) as depicted in Figure 2. The number of input units corresponds to the number of rows in the original matrix, while the number of output units corresponds to the number of columns (or vice versa). Each input unit is connected to each output unit, and all the units are linear.
[Figure 2]
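A minimal sketch of the rank 2 mapping, again illustrative rather than taken from the original model: input units index the rows of the memory matrix, output units index its columns, and the matrix entries are the connection weights, so presenting a cue amounts to a matrix-vector product.

```python
import numpy as np

# Hypothetical rank-2 tensor (matrix) memory with 3 rows and 2 columns:
# 3 input units, 2 output units, and memory[i, j] as the weight on the
# connection from input unit i to output unit j.
memory = np.array([[1.0, 0.0],
                   [0.0, 2.0],
                   [1.0, 1.0]])

def rank2_network(cue):
    # Each linear output unit j sums cue[i] * memory[i, j] over inputs i.
    return cue @ memory
```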
Rank 3 Tensors
A tensor representation of rank 3 maps to a 3 layer network (one input layer with two sets of units, one output layer, and one layer of hidden units) as depicted in Figure 3. The number of units in each of the input sets corresponds to the number of rows and the number of columns in the original tensor, respectively, while the number of output units corresponds to the number of "levels" in the original tensor. The number of hidden units corresponds to the number of units in one input set times the number of units in the other input set. Each hidden unit has a connection from one input unit from each input set, with a hidden unit existing for each possible combination. Each hidden unit is then connected to each output unit. All the input units and output units are linear, while the hidden units are sigma pi units. Sigma pi units multiply together the activations they receive, rather than adding them.
[Figure 3]
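The rank 3 mapping can also be sketched, under the same illustrative assumptions as above. There is one sigma pi hidden unit for every (row, column) pair, each multiplying one activation from each input set; the tensor entries then serve as the weights from the hidden units to the linear output units, one output unit per level.

```python
import numpy as np

# Hypothetical rank-3 tensor memory of shape (rows, columns, levels):
# 2 units in the first input set, 3 in the second, 2 * 3 = 6 sigma pi
# hidden units, and 4 linear output units (one per level).
memory = np.arange(24, dtype=float).reshape(2, 3, 4)

def rank3_network(cue_a, cue_b):
    # Sigma pi hidden units: each multiplies together the two
    # activations it receives, one from each input set.
    hidden = np.outer(cue_a, cue_b)              # shape (rows, columns)
    # Linear output units: each sums hidden[i, j] * memory[i, j, k].
    return np.einsum('ij,ijk->k', hidden, memory)
```

Cueing with one-hot vectors on both input sets picks out a single hidden unit, so the output is the corresponding fibre of the tensor, which makes the role of the multiplicative hidden layer easy to check.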