>One of the problems I have been thinking about for a number of years is how
>the learning process works. I'm sure a lot of people have.
This is the sixty-four thousand dollar question. Let me fill you in on what
I know. Bear in mind that what I describe is learning at the single-cell
level. There are higher-order effects of learning in the nervous system,
as well. Check out the work of Karl Lashley for higher-order explanations.
Despite the difficulties of studying single-cell mechanisms in the mammalian
nervous system, one procedure has proved useful for identifying possible
mechanisms of learning. A burst of many stimuli to certain neurons of the
hippocampus within less than 1 second changes the properties of those
neurons for a period of weeks. During that period, stimulation via the
synaptic transmitter glutamate produces a greater response than usual. That
increase in response is known as long-term potentiation, abbreviated LTP.
Although the route from LTP to learned behavior is not yet known, LTP does
offer one possible single-cell solution. LTP is thought to work like this
(warning: technical section). The burst of stimulation to the hippocampus
increases the concentration of calcium in the dendrites of the postsynaptic
neuron (calcium is necessary for neurotransmitter release). The increased
level of calcium activates a protein known as calpain, which breaks a
network of molecules normally found in the dendrites. With that network
broken, the dendrites change their shape and expose additional glutamate
receptors. Because of the additional receptors, the cell becomes more
sensitive to the excitatory transmitter glutamate. Elevated levels of
calcium or the synaptic transmitter norepinephrine in the hippocampus
facilitate LTP. Low levels of calcium can block LTP.
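If a programmer's caricature helps, here is a toy Python sketch of that
sequence (my own illustration only; every number in it is invented, not
measured, and "calpain activation" is reduced to a simple threshold):

    # Toy sketch of LTP induction: a burst raises dendritic calcium;
    # if calcium crosses a threshold, calpain-like breakdown exposes
    # extra glutamate receptors, so later stimuli evoke bigger responses.
    # All numbers are arbitrary illustrations.

    class Synapse:
        def __init__(self):
            self.receptors = 10       # exposed glutamate receptors
            self.calcium = 0.0        # dendritic calcium level

        def stimulate(self, pulses):
            self.calcium += 0.2 * pulses    # burst raises calcium
            if self.calcium > 1.0:          # "calpain" threshold crossed
                self.receptors += 5         # network broken, more receptors
            return self.response()

        def response(self):
            return self.receptors           # response scales with receptors

    s = Synapse()
    print("before burst:", s.response())    # 10
    s.stimulate(pulses=100)                 # high-frequency burst, < 1 s
    print("after burst:", s.response())     # 15 - potentiated response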
Three learning-related phenomena should be especially applicable to your
"web neuron" model (a toy sketch follows the list):
1.) Repetitive stimulation at any point in the system leads to a decline in
response (habituation).
2.) A strong stimulus leads to a prolonged facilitation of response
(sensitization).
3.) Pairing a weaker stimulus with a stronger stimulus increases the later
effectiveness of the weaker stimulus (classical conditioning).
>I think the basic principles are correct with the Web neuron idea, which
>could in fact be applied at any level of computing, even hardware, I believe.
>
>There are the four ingredients - inputs, outputs, processing, and
>linking - that appear fundamentally necessary and correct. It is strange that
>the only one of these missing from the start of computing was the linking;
>because conventional subroutines have i/o and processing. But now on the Net
>we have all the linking we can imagine - but the other three have
>disappeared!!! (or been separated)
>
>Anyway, even those ingredients don't automatically explain learning. I am
>interested in your point about the foetus and all the "loser" and "winner"
>cells. What exactly is going on? Why does one die and not another? Indeed,
>why do any die at all? What sort of structures are forming, and how?
Exuberant proliferation of cells followed by selective cell death seems to
be a standard mode of operation in neuronal development. Depending on the
neuronal structure, between 15% and 85% of the original neuronal pool may be
eliminated in selective cell death. In one case (the chick) a whole
neuronal tract, projecting from the visual cortex to the spinal cord, grows
and then dies. The principle of selection is not yet understood, but
functional participation is believed to be important. It may be that
functionally busy neurons receive more nourishment than the functionally
more idle and hence become more robust, a variation on the "them that works,
eats" rule.
My personal take on this is that there is a selective preparedness for
neuronal death at early stages of development. It is far better to sustain
brain injury at an early age because of the overabundance of possible
replacements. The infant monkey clinging to its mother's belly might lose
its grip and fall, sustaining some sort of damage, but the extra neuronal
pool is there to compensate. Evolution always seems to build in back-up
systems for survivability.
>What are the common ways of linking within small groups of neurons? Is there
>any looping going on (like a while or for-next loop, say, in programming)?
>Also, how do cells remember - is it by permanently changing the way inputs
>convert to outputs, as in the "weighting functions" of neural nets, or by
>making/breaking links, or both? Can old people still make new links?
Yes, there are loops called autoreceptors. These are situated on the
presynaptic axon bouton and are activated by the very neurotransmitters that
are released by the presynaptic neuron, thus regulating the amount of
transmitter released. That is, they serve a negative-feedback function:
when a cell releases a large amount of the transmitter, the transmitter
activates the autoreceptors to inhibit further release.
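In code, that negative-feedback loop looks something like this toy sketch
(the gains are invented, not physiological values):

    # Toy autoreceptor loop: released transmitter feeds back onto the
    # presynaptic bouton and damps further release. Gains are invented.

    release_drive = 1.0        # baseline tendency to release
    transmitter = 0.0

    for step in range(10):
        released = release_drive
        transmitter += released
        # Autoreceptor senses the cell's own transmitter, inhibits release.
        release_drive = max(0.0, 1.0 - 0.3 * transmitter)
        print(f"step {step}: released {released:.2f}, "
              f"drive now {release_drive:.2f}")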
You must bear in mind that neurons can be excitatory or inhibitory, and
combinations of both occur at the pre- and postsynaptic terminals. Thus, an
inhibitory input at the presynaptic terminal will decrease the chance of a
stimulated neuron releasing its transmitter package. Temporal and spatial
summation of inputs may occur to produce axonal output, as well. I suggest
looking up the work of Charles Scott Sherrington or Sir John Eccles for a
review of inhibitory and excitatory postsynaptic potentials.
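And a bare-bones picture of temporal and spatial summation (my own toy
integrate-and-fire caricature, not Sherrington's or Eccles's formalism;
threshold and leak values are invented):

    # Toy integrate-and-fire neuron: excitatory (+) and inhibitory (-)
    # inputs sum over space (many synapses) and time (leaky accumulation);
    # the axon fires only if the summed potential crosses threshold.

    def fires(inputs, threshold=3.0, leak=0.8):
        potential = 0.0
        for synapses in inputs:    # each tick: one list of inputs
            potential = potential * leak + sum(synapses)
            if potential >= threshold:
                return True
        return False

    print(fires([[1.0, 1.0], [1.0, 1.0]]))    # EPSPs summate: True
    print(fires([[1.0, -1.0], [1.0, -1.0]]))  # inhibition cancels: False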
Any other questions, feel free to ask. When you become famous, just let me
in on the stock options :)
Chat later,
Jeff
------------------------------------------------------------------------
Jeffrey N. Browndyke
Ph.D. Candidate in Medical/Clinical Psychology
Louisiana State University Email: cogito@premier.net
Department of Psychology Fax: (504) 388-4125
236 Audubon Hall URL: http://www.premier.net/~cogito
Baton Rouge, LA. 70803
Neuropsychology Central - http://www.premier.net/~cogito/neuropsy.html
Psycresearch-online Mailgroup - psycresearch-online@premier.net
------------------------------------------------------------------------