artificial intelligence - Hebbian learning
I have asked a question on Hebbian learning before, and I guess I got an answer accepted, but the problem is that I've now realized I had misunderstood Hebbian learning completely, and I'm a bit confused.
So, please explain how it can be useful, and what it is for. Because the way Wikipedia and other pages describe it, it doesn't make sense! Why would you want to keep increasing the weight between an input and an output neuron if they fire together? What kinds of problems can it be used to solve? When I simulate it in my head, I can't get basic AND, OR, and other operations to work (say I initialize the weights at zero: the output neurons never fire, and the weights are never increased!).
Your question seems rather theory-related, and I'm not sure if it belongs on SO, but since it's directly connected to neural networks, I'll try to answer.
We increase the weight between input and output neurons if they fire together, because firing together means they are somehow related.
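To make that concrete, here is a minimal sketch of the plain Hebbian update, delta_w = eta * pre * post, where pre is the input neuron's activity, post is the output neuron's activity, and eta is a learning rate I've picked arbitrarily for illustration:

```python
# Minimal sketch of the plain Hebbian rule: dw = eta * pre * post.
# The learning rate and activity values are illustrative choices, not from the answer.

def hebbian_update(weight, pre, post, eta=0.1):
    """Return the weight after one Hebbian update step."""
    return weight + eta * pre * post

w = 0.0
# The weight grows only when input (pre) and output (post) are active together.
w = hebbian_update(w, pre=1, post=1)   # both fire -> weight increases
w = hebbian_update(w, pre=1, post=0)   # output silent -> no change
w = hebbian_update(w, pre=0, post=1)   # input silent -> no change
print(w)  # 0.1
```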
Let's use the example of logic functions. In the AND function, we have 2 input neurons. If the input data is (0, 0), that means neither input neuron fires, and neither does the output. We don't need strong connections in that case.
Now take the input (1, 1). Both input neurons fire, and so does the output. In order to learn this correspondence, the network should increase the weights connecting the inputs and the output (remember, it's a matter of summing the inputs and a bias neuron).
Finally, when the input is (1, 0) or (0, 1), since the output is 0, this tells the network that neither connection should be strong enough to activate the output neuron on its own. A worked sketch of the whole AND example follows below.
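Here is one way to turn that into code (a sketch, not the only possible setup): a supervised Hebb net where the output is clamped to the target during training, which sidesteps the "weights start at zero so the output never fires" problem from the question. I'm using the common bipolar (-1/+1) encoding, since a pure Hebb rule learns AND cleanly in that representation; these choices are my assumptions, as the answer above only describes the idea informally.

```python
# Sketch of a Hebb net learning AND, assuming:
#   - supervised Hebbian learning (output clamped to the target while training),
#   - bipolar encoding (-1 for "off", +1 for "on"),
#   - a bias neuron whose input is always +1.

import numpy as np

# AND in bipolar form: the output is +1 only when both inputs are +1.
patterns = np.array([
    [ 1,  1],
    [ 1, -1],
    [-1,  1],
    [-1, -1],
])
targets = np.array([1, -1, -1, -1])

weights = np.zeros(2)  # start at zero, as in the question
bias = 0.0

# One pass of the Hebb rule: dw_i = x_i * t, db = t.
for x, t in zip(patterns, targets):
    weights += x * t
    bias += t

print("weights:", weights, "bias:", bias)  # weights: [2. 2.] bias: -2.0

# Test: the output fires (+1) only if the weighted sum plus bias is positive.
for x, t in zip(patterns, targets):
    y = 1 if weights @ x + bias > 0 else -1
    print(x, "->", y, "(target", t, ")")
```

With the plain 0/1 encoding, a pure Hebb rule can only ever increase the weights, so the threshold would have to be chosen by hand; that is why the bipolar form is commonly used when demonstrating Hebbian learning on logic functions.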
I hope that makes sense.