To: Jeffrey N. Browndyke
Thanks for the encouraging reply.
>This is a very interesting idea. But, if I understand you correctly
>the individual pages would work in an "all or none" fashion, very much
>like an action potential.
>One thing that should be considered is the
>graded potentials that neurons carry, as well. A neuron may be
>potentiated (i.e., brought closer to or farther from firing) by
>inhibitory or excitatory neurons. Computers are inherently binary,
>and unless something like "fuzzy logic" is used, the "web neuron"
>idea will only be a discrete approximation of brain functioning. This
>doesn't mean that the use of the brain as a model of integrative web
>structure isn't a valuable and potentially fruitful idea. I'd like to
>hear more.
I am not an expert on real neurons, but I do mostly believe that we should
try to duplicate them as closely as possible until proved otherwise.
Unfortunately this is not the opinion of the computer establishment, which
has been content for years to use completely non-neural languages like C,
BASIC, etc. To get to your point, which I think is a very good one: the
answer is really no, they do not have to operate in an "all or none"
fashion.
I think I shouldn't have used the term "FIRE" - it is misleading. When one
Web page FIREs a target page it also passes INPUT(s) to the target, just
like a conventional subroutine/procedure call. INPUTs could be whatever you
liked, e.g. real numbers or integers, which could be used to simulate the
variable (graded) pulse rate along axon inputs to real neurons. Only one
input is required for this, but I included the option to have more because
it is easy to do so. This could be a mistake, since it is not a true
simulation. If Web page A represents neuron A and FIREs Web page B (neuron
B), then this represents a single axon link between neurons A and B.
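For concreteness, here is a rough Python sketch of what I mean (the names
fire/receive are purely illustrative, not part of any fixed proposal):

    class WebNeuron:
        """One Web page standing in for a neuron (illustrative only)."""
        def __init__(self, name):
            self.name = name

        def fire(self, target, value):
            # FIREing passes an INPUT to the target, just like a
            # subroutine/procedure call; "value" plays the part of the
            # graded pulse rate along a single axon.
            target.receive(self, value)

        def receive(self, source, value):
            print(f"{self.name} received {value} from {source.name}")

    a = WebNeuron("A")
    b = WebNeuron("B")
    a.fire(b, 0.7)    # a single axon link from A to B, carrying level 0.7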
A real neuron will probably have many incoming and outgoing axons. For
example, for neuron X, if there are say three incoming and two outgoing
axons with pulse levels (a, b, c) and (d, e) respectively, then this could
be modelled by six Web pages. At different times, three of these pages
would each FIRE page X (representing neuron X) from within their individual
HTML:
page 1: FIRE X(a)
page 2: FIRE X(b)
page 3: FIRE X(c)
where a, b, c could be +ve integers.
Page X could then FIRE pages 5 and 6 as and when it wanted to. The levels
of d and e would be some function of a, b, c and possibly of other historic
FIRINGs from pages 1, 2 and 3. This is to do with your point about
inhibition and excitation.
page X: FIRE 5(d)
page X: FIRE 6(e)
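In ordinary code the same example might look something like this (only a
sketch - the particular sums are arbitrary, just to make it runnable):

    def neuron_x(a, b, c):
        # d and e are "some function" of the three incoming levels;
        # a real page could also fold in historic FIRINGs (see below)
        d = a + b
        e = a + b - c
        return d, e

    d, e = neuron_x(3, 1, 4)   # a, b, c as +ve integers
    # page X would now FIRE page 5 with level d and page 6 with level e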
Firing a page does not necessarily cause any change to its outputs. When
the server of page X, in the above example, receives the request to FIRE
page X from, say, page 1, it activates page X from RAM or disc (preferably
RAM) and passes the INPUT "a" to it. There is code within page X that then
decides what to do with this INPUT, which could be similar to the logic in
real neurons. The code may also have to know that page 1 was the
instigator. Whether "a" is inhibitory or excitatory will have been decided
when the link was created and recorded as status or tabular information,
say on the end of the HTML. After its code is run, page X goes back to
sleep, waiting to be fired again.
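As a sketch of what that tabular information might look like (the comment
format here is my own invention, nothing standard):

    import json, re

    def read_link_table(html):
        # assume the table sits in a trailing HTML comment, e.g.
        #   <!-- LINKS: {"page1": 1, "page2": -1} -->
        # with +1 marking an excitatory link and -1 an inhibitory one,
        # fixed when the link was created
        m = re.search(r'<!--\s*LINKS:\s*(\{.*?\})\s*-->\s*$', html)
        return json.loads(m.group(1)) if m else {}

    html = 'page X body ... <!-- LINKS: {"page1": 1, "page2": -1} -->'
    print(read_link_table(html))   # {'page1': 1, 'page2': -1}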
There seem to me to be two things going on regarding excitation and
inhibition in neurons. Firstly, an output may not trigger unless, say, the
average level of all the inputs is above a threshold; this is time
independent. Secondly, if there is a short burst of activity from any
input, this may make a later burst from the same or other inputs more, or
less, likely to cause output.
These can both be handled by saving time and status/table information on
the end of the HTML each time the page goes to sleep (and having the right
processing code). At the next firing, the code will know the history and so
can decide exactly what to do.
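Both mechanisms come down to keeping a timestamped history with the page
and consulting it at each firing. A rough sketch (the threshold and time
window values are arbitrary):

    import time

    class PageX:
        THRESHOLD = 2.0   # trigger level for the average input
        WINDOW = 1.0      # seconds; how long a burst stays "recent"

        def __init__(self):
            # plays the part of the time/status info saved on the end
            # of the HTML each time the page goes back to sleep
            self.history = []

        def fire(self, source, value, sign=+1):
            now = time.time()
            self.history.append((now, source, sign * value))
            levels = [v for (t, s, v) in self.history]
            # 1. time-independent: average of all inputs over a threshold
            steady = sum(levels) / len(levels) > self.THRESHOLD
            # 2. time-dependent: a recent burst makes a later trigger
            #    more (or, for inhibitory inputs, less) likely
            recent = sum(v for (t, s, v) in self.history
                         if now - t < self.WINDOW)
            if steady or recent > 2 * self.THRESHOLD:
                self.trigger_outputs()

        def trigger_outputs(self):
            print("page X FIREs its output pages")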
I am sure that standard coding methods could quickly evolve so you wouldn't
have to keep writing it all out every time you did a new page. Perhaps some
of it would be automatically handled by the server (outside the HTML!!).
I haven't touched much on dynamic linking, where new links are made or
existing links broken. This is obviously important; e.g. a lot of this goes
on in the foetus and when you have a few beers. It may be important in the
learning process, but I think current neural networks learn by modifying
the state (weighting functions) within each node rather than by making new
links. Both strategies can be covered by the Web neuron.
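In code the two strategies might look like this (sketch only; the method
names are mine):

    class Node:
        def __init__(self):
            self.links = {}               # target page -> weighting

        def add_link(self, target, weight=1.0):
            self.links[target] = weight   # grow a new link

        def break_link(self, target):
            self.links.pop(target, None)  # prune an existing one

        def adjust_weight(self, target, delta):
            # conventional neural-net learning: keep the link, modify
            # the weighting stored with it
            if target in self.links:
                self.links[target] += delta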
I think I have shown that virtually every situation in a real neural plexus
can be accurately modelled by the Web neuron system. In addition, other
things can be done which the brain can't, such as passing pictures, text,
etc. as INPUTs, and indeed having multiple INPUTs down the same "axon".
Whether this is desirable is another question. My guess is not: better to
keep each page small and simple but massively linked to other pages, taking
the power from the links themselves.
Web pages may not be the best way to do it. It has been suggested that HTML
should be scrapped and a pure neural-network model used. I don't know how
well that would go down.
Sorry to take so long responding; I have had to do several other long
replies. This involves a lot of thinking time, since I am not 100% clear
(or even 70%) on everything myself.
By the way, where did you see the posting?