To: Frank Schuurmans
Hi Frank,
>As I understand it a webaxon is for storing/sending data
>to another webneuron. And a webstore is for storing 'static
>data' (for example the time data for calculating
>firing frequencies if we want to model this).
A webaxon is a webstore but with different links. The webaxon
has two links, one to the caller and one to the target. The webstore
just has a single link to its host webneuron. I don't think it is
quite correct to think of the webaxon as "sending" data. The
target script can do whatever it likes with the input data. The
script may choose to "send" data forward to other webneurons,
but my guess is that in most cases "large" data will mostly
stay put in the axon and be controlled or modified in situ by links
and script, rather than by being moved around.
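To pin the link structure down, here is a rough C++ sketch (the names are
just mine, for illustration, not a fixed design):

/* Rough sketch only, with invented names.
   A webstore carries one link (to its host webneuron);
   a webaxon carries two (to its caller and its target). */

struct Webstore {
    long host;      // INDEX of the host webneuron
    char *data;     // the stored data itself
};

struct Webaxon {
    long caller;    // INDEX of the calling webneuron
    long target;    // INDEX of the target webneuron
    char *data;     // input data, worked on in situ
};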
In fact, I am not sure that we should have "large" data at
all. With webneurons we can chop data up into small pieces and
use the links to build things up. As I understand the brain, this is
exactly how it recognises an image. The original image is split into
lines/colours etc. There is no "large" data in the brain for say an
image of your brother, just perhaps a final neuron at the end of the
chain that is activated only by his face. This is like a computer
neural network.
>The use of the LUT is not completely clear. I understand
>CALLER, TARGET, HOST are looked up in the LUT which returns
>the address. Is this correct?
Yes, the LUT just allows the host script to locate all the neighbouring
structures and data when running. Sorry, there was not really enough room on
the gif to sort out all the detail. The LUT also allows each structure to have
free movement in RAM. This is very good for editing, when say a structure gets
bigger and has to be moved to the top of RAM. Only the LUT entry changes, not
the INDEX, so none of the links to that structure are disturbed.
I think the LUT is analogous to a Windows FAT (File Allocation Table):
just a way of physically finding structures (files), given the INDEX
(filename). The LUT converts the INDEX to the physical address in RAM.
Each structure (CALLER, TARGET, HOST etc.) has a unique INDEX for
location. This INDEX is allocated when the structure is first created.
The very first structure ever created would have INDEX 1 say, the
next INDEX 2, and so on. You need to keep a "next available INDEX number" variable.
The LUT means we don't need to use files anymore. Linux needs rewriting in the
long term.
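As a rough C++ sketch of what I mean (my own names, nothing fixed):

/* Sketch only. The LUT maps a permanent INDEX to a current RAM
   address, just as a FAT maps a filename to a place on disk.    */

const long MAX_STRUCTURES = 100000;   // arbitrary limit for the sketch

void *LUT[MAX_STRUCTURES];            // INDEX -> current RAM address
long next_index = 1;                  // the "next available INDEX number" variable

long create_structure(void *address)  // called when a structure is first created
{
    LUT[next_index] = address;
    return next_index++;              // this INDEX never changes afterwards
}

void move_structure(long index, void *new_address)
{
    // Only the LUT entry changes; the INDEX, and hence every link
    // that refers to this structure, is left undisturbed.
    LUT[index] = new_address;
}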
> I think your views on multitasking are a bit simplistic:
> use a stack and you've got multitasking. (This is much like
> MS Windows multitasking, which is not really multitasking).
I don't know how Windows multi-tasking works. Does it have a stack of
programs? In our case the stack is of webneurons, which are very
small structures compared to programs, so the multi-tasking should
be very smooth. I haven't done much on multi-tasking. Do you
know a better way to implement it?
To more closely simulate the brain, the stack could even be eliminated, as
I already suggested in my Email to you "closer to the brain". By rapidly
checking each host webneuron in looping sequence, using the LUT, the kernel
could see whether any host inputs (or other things near the host) had changed
and hence whether the full host script needed running. This would be practical
provided the number of webneurons was limited. It has ties to parallel
processing, because what may happen in the final analysis is one processor
per webneuron! Then we are saying bye bye to Pentiums, as well as Microsoft.
Everything must eventually shift to parallel processing anyway.
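A sketch of that stackless kernel loop, again in C++ and again with my own
invented names:

/* Sketch only. The kernel scans every host webneuron in looping
   sequence via the LUT and runs the script of any whose inputs
   have changed, so no stack of pending webneurons is needed.     */

struct Webneuron {
    bool inputs_changed;            // set whenever a link writes new input data
    void (*script)(Webneuron *);    // the host script
};

Webneuron *LUT[100000];             // the LUT for this sketch: INDEX -> webneuron
long next_index = 1;

void kernel_loop()
{
    for (;;) {                                      // rapid, endless scan
        for (long i = 1; i < next_index; i++) {
            Webneuron *n = LUT[i];
            if (n != 0 && n->inputs_changed) {
                n->inputs_changed = false;
                n->script(n);                       // run the full host script
            }
        }
    }
}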
>As I understand it, the XLUT and LUT are arrays
>of pointers.
Yes, but the XLUT points to an external computer's INDEX, not to RAM.
> I would NEVER ALLOW ANOTHER COMPUTER TO
> DIRECTLY MAKE CALLS TO MY RAM.
The LUT is not an issue here since it is on your local machine
and points only to your local machine. I presume your worry
is with the XLUT.
The XLUT does point to external machines, that is true, but it does
not point to their RAM, only to the structure INDEX on the external
machine. The external machine has its own LUT, XLUT and webneuron
interpreter (kernel). There is no way to get at the external RAM except
via the external kernel and LUT. The kernel can have as many safety
features built in as you like. I believe that because webneurons
are so simple, it should be much easier to block malicious
attacks or errors.
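In other words, an XLUT entry need hold nothing more than something like
this (sketch only, my names; send_to_kernel is a hypothetical network call):

/* Sketch only. An XLUT entry never holds a foreign RAM address,
   only the external machine's network address plus the structure
   INDEX on that machine. Everything goes through that machine's
   own kernel, which resolves the INDEX with its own local LUT.   */

struct XLUTEntry {
    char machine[64];    // e.g. hostname or IP address of the external machine
    long remote_index;   // structure INDEX on that machine, never a RAM pointer
};

// Hypothetical network call to the external machine's kernel, which
// applies its own safety checks before touching its own LUT.
void send_to_kernel(const char *machine, long index, const char *input_data);

void fire_remote(const XLUTEntry &entry, const char *input_data)
{
    // The only possible operation: ask the remote kernel.
    // There is no way to read or write the remote RAM directly.
    send_to_kernel(entry.machine, entry.remote_index, input_data);
}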
If you stick to what you have said, then the full impact of the webneuron
idea will be lost. There will be no distributed "super-program" of the kind
you mentioned in a previous Email.
Are you still worried?
>On the use of language
>I think your assembler model can easily be implemented using
>other languages as well. For example using C++ you can make a CLASS
>webneuron:
I think it can be done in any Turing-complete language (HTML is not,
without extensions). I am not sure if C++ is Turing complete.
Assembler is. C++ probably is.
> (webneuron.h)
>
>/* This is just a simple framework
> * for a WebNeuron Class (The classes
> * List and Axon are not given here)
> * the syntax might not be completely correct.
> */
>
>class WebNeuron {
> public:
> Axon *fire(Axon *in); //fire function takes a pointer to (input) axon
> //as argument returns pointer to (output) axon
> private:
> Webstore *wb; // pointer to webstore
> List *outputaxons; // list of output axons
> List *inputneurons; // " " input "
> };
Yes, the webstore is local (private) data. But can data be persistent in C++,
or will the data be lost after a subroutine has finished?
Sorry, I haven't programmed in C++.
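What I mean by "persistent" is roughly this (my own sketch, so the syntax
may well be off):

// Sketch of what I mean by persistent data.
class WebNeuron {
  private:
    int store;                          // the "webstore" data, kept inside the object
  public:
    WebNeuron() { store = 0; }
    void fire(int in) { store += in; }  // modifies the stored data
    int value() { return store; }       // reads it back on a later call
};
// The question is whether 'store' survives between calls to fire(),
// i.e. for as long as the WebNeuron object exists, rather than being
// lost when fire() returns like an ordinary local variable.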
>/* To write a new neuron we just have to implement the fire function
> * for example to make a Webneuron: A
> */
>
>a.cpp
>
>class A: public WebNeuron {
> public:
> Axon *fire (Axon *in) {
> // the code //
> }
>}
>
>I don't care which languages we use. But I prefer C or C++. (Linux
>is written in C).
Shall we agree to try C++ in Linux then and drop Java?
>They can also draw them using my idea
>
>webneuron A:
>
> input = (axons or webneurons)
> | | | | | = links to the map of
> the script
>
> MAP of Script (A)
>
> \ / \ / \ /
> > = > = testing output of script
> / \ / \ / \
>
> output = (axons or webneurons)
>
>This could be a template and when you zoom out you get, for example:
> \ / \ / \ /
> webneuronA webneuronB webneuronC
> \ \ / (B)
> webneuronD - webneuronE
> / \
>
>And you could zoom out:
>
> webletA webletB
> \ /
> < (C)
> / \
> webletC webletD
>
>As you can see the distinction between maps A, B and C is quite
>arbitrary (or the names you give them depend on your point of view).
I still don't really see much difference between your idea and the
original concept, except that the script now seems to be just a single function
if you use the assembler model. For your "<" neuron (hub) we would just
use a single assembler function customised for the "<" operator. This
function would know to compare say input1 < input2 rather than
input2 < input1, which I don't think you have covered.
You still have to have everything else as in my assembler model
(caller/target links etc.), don't you?
Nevertheless, I think the idea of restricting the script to a single function
(operator) is a good one. Please confirm that is all you intend to do.
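For the record, the customised single-operator script for the "<" hub need
be no more than something like this (sketch, my names):

// Sketch of a single-operator script for the "<" hub webneuron.
// The function itself fixes the order of comparison, so it always
// tests input1 < input2 and never input2 < input1.
int less_than_script(int input1, int input2)
{
    return (input1 < input2) ? 1 : 0;   // 1 = fire the output, 0 = stay quiet
}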
Weblets were always completely zoomable from the start. There is really
no such thing as a weblet, just a huge network of mappable linked
webneurons, as in the brain. You could zoom from single neurons right
up to say the hippocampus (if you had a good enough scanner).
>As I've mentioned before the largest part of the problem (whether we
>use my model or yours) is the interface.
To understand your model it would help if you could draw out your assembler
version in the same sort of way as I have done for mine. Then we can see
precisely the differences and how yours works in detail. Sorry, but I
cannot fully understand your idea with just text and rough diagrams. I think
we both understand the basics of assembler whereas I don't know C++ and the
Class system. I will try to learn it, though. For the moment, can we try to
make the important decisions using an assembler model, because it is so simple
and basic? Once decided, we can transfer the system to C++.
Regards,