6.034 Artificial Intelligence - Recitations, fall 2004 online slides on learning


Next Layer

For the input-to-hidden weights, we need a quantity analogous to the error term defined for the output nodes. This is where error backpropagation comes in: each hidden node is assigned a portion of the error in proportion to its contribution to the output nodes.

First,

\(\Delta_j \;=\; g'(in_j) \sum_i w_{ji} \Delta_i\)

assigns a portion of the responsibility for the error to hidden node j.
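
To make this step concrete, here is a minimal NumPy sketch of the backpropagated error term; the function name, the choice of a sigmoid for g, and the array layout are illustrative assumptions, not part of the original slides.

\begin{verbatim}
import numpy as np

def hidden_deltas(in_j, W_out, delta_out):
    """Back-propagate error terms from the output layer to the hidden layer.

    in_j      : weighted input sums at the hidden nodes, shape (n_hidden,)
    W_out     : hidden-to-output weights, W_out[j, i] = w_ji, shape (n_hidden, n_out)
    delta_out : error terms Delta_i at the output nodes, shape (n_out,)
    """
    g = 1.0 / (1.0 + np.exp(-in_j))   # sigmoid activation g(in_j), an assumed choice of g
    g_prime = g * (1.0 - g)           # g'(in_j) for the sigmoid
    # Delta_j = g'(in_j) * sum_i w_ji * Delta_i
    return g_prime * (W_out @ delta_out)
\end{verbatim}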

The proportion is determined by the weights on the links from j to the output nodes. Now we can state the weight-update rule for links from input nodes to hidden nodes.

\(w_{kj} \;\leftarrow\; w_{kj} + \eta \, I_k \, \Delta_j\), for each link from input node k to hidden node j.
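
A matching sketch of the update rule itself, again with assumed names and array shapes; the outer product applies the update to every input-to-hidden link at once.

\begin{verbatim}
import numpy as np

def update_input_weights(W_in, I, delta_hidden, eta=0.1):
    """Apply w_kj <- w_kj + eta * I_k * Delta_j for every input k and hidden j.

    W_in         : input-to-hidden weights, W_in[k, j] = w_kj, shape (n_in, n_hidden)
    I            : input activations I_k, shape (n_in,)
    delta_hidden : hidden error terms Delta_j, shape (n_hidden,)
    eta          : learning rate (value here is an arbitrary example)
    """
    # np.outer(I, delta_hidden)[k, j] == I_k * Delta_j, so this is
    # exactly the per-link rule applied to the whole weight matrix.
    W_in += eta * np.outer(I, delta_hidden)
    return W_in
\end{verbatim}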