Weight update for hidden-to-output links:

    w_ji <- w_ji + eta * a_j * Err_i * g'(in_i), where

w_ji = weight of link from node j to node i
eta = learning rate (sometimes denoted "r")
a_j = activation of node j (i.e., what the node 'outputs').
      If j is a hidden node, this is g(in_j); if j is an input node, this is the input value itself.
Err_i = error at node i (target output minus actual output;
      this determines how much we need to change ALL the weights into node i)
g' = derivative of the transfer function g

Let Delta_i = Err_i * g'(in_i) represent the error term.
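The update rule above can be sketched in a few lines of Python. This is a minimal illustration, assuming a sigmoid transfer function for g; the function and parameter names (update_output_weight, eta, etc.) are chosen here for readability and are not part of the notes.

```python
import math

def sigmoid(x):
    """Transfer function g (assumed sigmoid for this sketch)."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_prime(x):
    """Derivative g' of the sigmoid: g(x) * (1 - g(x))."""
    s = sigmoid(x)
    return s * (1.0 - s)

def update_output_weight(w_ji, a_j, target_i, in_i, eta=0.1):
    """Delta-rule update for one hidden-to-output weight w_ji.

    a_j      : activation of hidden node j (what node j 'outputs')
    in_i     : weighted input arriving at output node i
    target_i : desired output of node i
    eta      : learning rate
    """
    err_i = target_i - sigmoid(in_i)       # Err_i = target output minus actual output
    delta_i = err_i * sigmoid_prime(in_i)  # error term Delta_i = Err_i * g'(in_i)
    return w_ji + eta * a_j * delta_i      # w_ji <- w_ji + eta * a_j * Delta_i
```

For example, with w_ji = 0.5, a_j = 1.0, target = 1.0, and in_i = 0.0, the node's actual output is g(0) = 0.5, so Err_i = 0.5 and the weight is nudged upward, toward producing the target.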