Webfuns and websubs (Re: webneurons part 2)

   Date:  21 June 96
     To:  Frank Schuurmans <frank@bio.vu.nl>
   From:  john@brain.eu.org (John Middlemas)
Subject:  Webfuns and websubs (Re: webneurons part 2)

Hi again,


>yes, could you explain the difference between a webneuron a weblet, websub 
>and webfun to me?

As I said, webfuns/websubs are a difficult area which I do not understand 100%.
I am not saying that they are really needed, although I think they will be. 
This must be verified by trial and error.

As a guess, I see the following system:-

APPLICATIONS (large weblets). They could be distributed across many computers.

   CUSTOM WEBLETS - smaller weblets for one-off purposes, or for building 
                    applications.
                                                    
   WEBNEURON LIBRARY - commonly used webneurons with script, webaxons, 
		       webstores and any attached webfuns included. 
                       Used to build weblets.

   WEBLET LIBRARY - commonly required (small) weblets used to help build
                    applications

       WEBFUN - For use inside a host webneuron

           ATTACHED:  Storage required, so copy library original, 
                      and attach copy permanently to host.

           TEMP:  No storage required.
                
                TEMPLIB:  Attach library original temporarily to the host.
        
                TEMPHOST:  Copy and attach to host, use, then delete. 
			   This method would have to be used if more 
			   than one caller wanted to use the library 
			   original at the same time.
                  
       WEBSUB - For use in a weblet, not inside a host webneuron.

           HARDCOPIED:  For copying permanently into a weblet when editing,
                        thereby increasing the weblet size. 

           TEMP:  Activated by the FIRE command. Some input(s) to the 
                  websub must have previously been set to the URL(s) 
		  where the results must go, otherwise the websub would 
		  not know where to send them. I can think of an immediate 
		  use here for "Webmail" processing. One input 
		  would be the mail itself, another would be the recipient 
		  URL. The mail would then be stored in a webstore of the 
		  recipient. The websub would have to make sure it inserted 
		  links to the caller in all final targets and vice-versa, 
		  in order to maintain the mappability and inherent indexing
                  ability of the network.  Temp Websubs may also have good
                  potential as editors.

I suppose the whole network would then consist of all the applications in the 
world, presumably linked in some way, into a "super-brain".

By "storage required", I mean that some of the webneurons in the 
library weblet may have to store data for later use after completing the 
first firing, therefore this library weblet must be copied to the first 
firing webneuron, and that copy cannot be used by a different firing 
webneuron, in case the stored data is corrupted. the next firing 
webneuron goes to the library copy.
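
To pin the three webfun modes down a bit, here is a rough sketch in Python. 
It is only an illustration of the idea; names like LibraryWebfun, Host and 
attach_copy are inventions of mine, not part of any actual design:

import copy

class LibraryWebfun:
    """A library original that keeps long-term state ("storage required")."""
    def __init__(self, name):
        self.name = name
        self.stored = {}                  # data kept between firings

    def run(self, value):
        self.stored["last"] = value       # remembers something for later use
        return value * 2

class Host:
    """The host webneuron that wants to use a library webfun."""
    def __init__(self):
        self.attached = {}                # permanent private copies (ATTACHED)

    def attach_copy(self, fun):
        # ATTACHED: copy the library original and keep the copy for good,
        # so its stored data cannot be corrupted by any other host
        self.attached[fun.name] = copy.deepcopy(fun)

    def use_templib(self, fun, value):
        # TEMPLIB: borrow the library original itself, temporarily;
        # only safe while no other caller is using it at the same time
        return fun.run(value)

    def use_temphost(self, fun, value):
        # TEMPHOST: take a throwaway copy, use it, then delete it
        temp = copy.deepcopy(fun)
        result = temp.run(value)
        del temp
        return result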

We would need to allow for multi-level nesting, i.e. webfuns/subs within 
webfuns/subs.

I will try and get a few pictures up on the website shortly, to 
illustrate these ideas. There will be a pictures link on the homepage.

A webneuron is a subclass of a hub (described above). It is a software 
neuron implemented using an extended HTML page and capable of firing 
and being fired. A hub is ANY implementation of a software neuron, for 
example at the assembler level. I have done some work on hubs at this 
level.
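
A very rough sketch of the hub/webneuron relationship, again with class and 
method names I have made up purely for illustration:

class Hub:
    """ANY implementation of a software neuron, e.g. at the assembler level."""
    def __init__(self):
        self.inputs = []       # hubs that can fire this one
        self.outputs = []      # hubs this one can fire

    def fire(self, data):
        raise NotImplementedError

class WebNeuron(Hub):
    """A hub built as an extended HTML page, able to fire and be fired."""
    def __init__(self, url, script=""):
        super().__init__()
        self.url = url         # the page's address
        self.script = script   # the extended-HTML script held on the page

    def fire(self, data):
        # a real system would run the page's script here;
        # this sketch just passes the data on to every output link
        for target in self.outputs:
            target.fire(data)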

A weblet is not well defined. It is at least two webneurons linked 
together so that a map can be drawn. However, it is intended to refer 
to a small group of mappable linked webneurons that do a simple task. 
A weblet can be linked to another weblet thereby forming a larger 
weblet, and so on, till we have huge numbers of linked/mappable webneurons. 
This would still technically be a weblet, and all homogeneous. 

If you can think of better terminology then please do, because I am not 
particularly happy with the ambiguous "size" problem of weblets. 
A weblet is strictly speaking any subset of the whole network.

A webfun would operate only WITHIN a webneuron, perhaps to provide a 
complex internal programming operation. I think they will be necessary. 
A webfun/websub is built from webneurons. All output from a webfun is 
returned to the same host webneuron that fired it and to which it may 
be attached. This is different from a websub which can return output 
to different webneurons.
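
To show the difference in where the results go, here is a small sketch. Host, 
run_webfun, run_websub and the send callback are all invented for the example, 
not part of the design itself:

class Host:
    def receive(self, result):
        print("host got", result)

def run_webfun(host, inputs):
    # a webfun can only hand its output back to the host that fired it
    host.receive(sum(inputs))

def run_websub(inputs, result_urls, send):
    # a websub is told, through its inputs, which URL(s) to fire results to,
    # so its output can go to webneurons other than the caller
    result = sum(inputs)
    for url in result_urls:
        send(url, result)

run_webfun(Host(), [1, 2, 3])
run_websub([1, 2, 3], ["target.html"],
           lambda url, r: print("fired", r, "to", url))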

Inside a biological neuron there are structures (such as mitochondria) 
that may perform complex functions on stored information, or even exert 
real-time control over input/output levels. Webfuns would do this job. 
They are not explicitly linked to the host webneuron. Attached webfuns 
could "wander round" the host, perhaps monitoring things over a 
long period of time by self-activating at regular time intervals. 

With a non-attached webfun you wouldn't be able to go to the webfun and 
get a list of all its callers. Webfuns are not part of the main map, 
but each would have its own local map.

If a webfun needed to store long term data relating specifically to a 
particular host page, then that webfun is tied to the host. Webfuns that 
just do one quick calculation do not need to be tied, and can be part of a 
general purpose webfun library, each used by many different hosts.

A webfun would usually be fired, but it is possible that it could 
self-activate at regular time intervals. I mentioned timed firings in 
the original posting. Thus we have the possibility of webfuns with a 
life of their own, perhaps policing the internal workings of their host 
page (webneuron). To return data to the host they would have to directly 
alter host <MEM> locations. 
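
A sketch of such a self-activating, attached webfun, waking on a timer and 
writing straight into the host's <MEM> locations. The threading approach and 
every name below are my assumptions, just to illustrate the idea:

import threading, time

class HostPage:
    def __init__(self):
        self.mem = {"firings": 0, "healthy": True}    # stands in for <MEM> locations

class WatchdogWebfun:
    """An attached webfun with a life of its own: no links outside its host."""
    def __init__(self, host, interval=60.0):
        self.host = host
        self.interval = interval

    def start(self):
        threading.Thread(target=self._police, daemon=True).start()

    def _police(self):
        while True:
            time.sleep(self.interval)     # self-activates at regular intervals
            # return data to the host by directly altering host <MEM> locations
            self.host.mem["healthy"] = self.host.mem["firings"] < 1000

WatchdogWebfun(HostPage(), interval=1.0).start()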

Of course, it is possible to have nested webfuns, in the same way as 
subroutines are nested, but I'll leave this alone for the moment.

The theory of webfuns is quite complex and there are many other possibilities 
that I haven't discussed or even thought of. The best thing we can do is 
to create a simple basic webneuron system with no webfuns/websubs and see 
how things develop in the light of real applications.

A websub operates outside the webneuron, but is still made of linked webneurons. 
There is a strange result from all this. There is no problem linking 
applications since all you do is to form links from one application to 
another in the same way as normal. Two applications can then merge seamlessly. 
This is a big advantage over conventional computing where there are many 
problems linking applications. The strange result is this - that because 
it will be so easy to link applications then the whole software world may 
eventually merge into a huge "brain". This may include the 
present (non-programmable) WWW as we know it. There will be many copies 
of each library weblet spread around the world. All data is held in 
webaxons and webstores. There is no other way to store data at all.

I have not covered hormones, which are akin to GLOBAL variables. Global
variables are frowned upon in conventional languages, but when programming 
a large job there always seem to be a few trying to emerge so they must be 
fundamental. To model hormones we may have to allow free-ranging hormone 
weblets to wander about, perhaps confined to a particular subset of the 
network. The multi-tasking nature of webneurons makes this more feasible. 
They could operate on a time delay, coming to life every so often without 
actually being fired (and therefore having no external links), but perhaps 
being able to modify the webaxon or webstore data of webneurons in their 
immediate "vicinity". How you determine "vicinity" 
I don't know. I don't think they should actually fire anything.
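
Purely as a sketch of the hormone idea (the vicinity, the nudge and every 
name below are guesses on my part):

import random

class Webaxon:
    def __init__(self, data):
        self.data = data

class Webneuron:
    def __init__(self, *axon_values):
        self.webaxons = [Webaxon(v) for v in axon_values]

class HormoneWeblet:
    """Comes to life every so often, is never fired, and fires nothing."""
    def __init__(self, region):
        self.region = region      # the subset of the network it is confined to

    def tick(self):               # imagine this running on a time delay
        # it only modifies webaxon data of webneurons in its "vicinity"
        for neuron in self.region:
            for axon in neuron.webaxons:
                axon.data *= random.uniform(0.95, 1.05)   # a stand-in "hormonal" nudge

vicinity = [Webneuron(1.0, 2.0), Webneuron(3.0)]
HormoneWeblet(vicinity).tick()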

>> A network of brain neurons is hardwired.
>
>and later in the message you write:
>
>> The process of learning may involve neuron creation/deletion, especially in 
>> the foetus I am told. I think automatic page/link  creation/deletion is a 
>> definite bonus. Besides, why restrict ourselves.
>
>This seems somewhat paradoxal to me, could you explain your ideas on a network 
>being hardwired and automatic link/page creation and deletion?

Sorry, yes it does sound a bit contradictory. Hardwiring is where you can 
draw a map starting from any point in the system. Creating and deleting 
links/pages changes the map slightly but the system is still hardwired
because a map can still be drawn from any point. When you delete a page 
any loose link ends should be sorted out.
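
As a sketch of what I mean by hardwired: from any page you can walk the links 
and draw the map, and deleting a page also tidies up the loose link ends. The 
data layout here is only my illustration:

# every page keeps explicit two-way links, so a map can be drawn from any start
network = {
    "query.html":   {"address.html"},
    "address.html": {"query.html", "report.html"},
    "report.html":  {"address.html"},
}

def draw_map(start):
    """Walk the links outward from any page and return every reachable page."""
    seen, stack = set(), [start]
    while stack:
        page = stack.pop()
        if page not in seen:
            seen.add(page)
            stack.extend(network[page] - seen)
    return seen

def delete_page(page):
    """Deleting a page also sorts out any loose link ends pointing at it."""
    network.pop(page, None)
    for links in network.values():
        links.discard(page)

print(draw_map("query.html"))             # the whole map, drawn from one point
delete_page("report.html")
print(draw_map("address.html"))           # still a complete map after deletion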

The point I was trying to make about hardwiring is that conventional 
programming systems and even the WWW are not hardwired and no map can 
be drawn. Therefore they are not like the brain at all. The ability to 
construct a map is, I believe, a fundamental property of a correct 
programming system. You can map a neural network; perhaps that is why 
neural nets can do things conventional computing cannot. But there are other 
fundamental properties, such as programmability, that neural nets don't have.

>> However, there are at least two ways a weblet subroutine/function 
>> (websub/webfun) could possibly be useful.
>> WAY 1 - If you really needed to do some function within the webneuron. 
>> you could fire a weblet to do it and it could return result(s) 
>> WITHIN the webneuron page. Call this type of weblet a webfun:-
>
>> <RESULT Street, Town><FIRE href="webfun URL" 
>> PARAMS=Address, Mr Smith></RESULT>
>> (I think "PARAMS" is better than "INPUTS")
>
>using my cgi-idea:
>neuron 1 (query.html) would contain: 
> <TR>
>  <TD>! address</TD>
>  <TD>Mr Smith</TD>
>  <TD><A HREF="/cgi-bin/fire/address.html">address</A></TD></TR>
>
>and neuron2 (address.html):
>
> <TR>
>  <TD>== Mr Smith</TD>
>  <TD>Street, Town</TD>
>  <TD><A HREF="/cgi-bin/fire/query.html">query</A></TD></TR>

I don't see how neuron 2 can return Mr Smith's address to neuron 1. If 
Mr Smith's address was say "12 longway road, Brighton" then 
this should be transmitted back to query.html which must then set 
Street="12 longway road" and Town="Brighton". 
Wouldn't your example create an infinite loop with query firing address, 
address firing query, query firing address and so on?

>> WAY 2 - 
>
>> Say you have a standard weblet that is used time and time again in many 
>> circumstances. Call this a websub. It may return something to the caller 
>> by firing param(s) back to it or it may return nothing. But it is not 
>> internal like the the webfun.
>
>I don't understand the 'internalness' of a webfun do you mean a SSI-like
>thing. 

The webfun is internal because it is fired by the host webneuron and can 
only return results to the same host webneuron. Results are returned in a 
more direct way than by firing them back, and the host script can still 
be running after all results have been returned. A webfun would have no 
life or links outside the host.

>> There are at least two options for handling websubs:-
>> OPTION A
>> Copy the websub from say a central pool of such standard websubs, and 
>> "slot it in". 

>> Unfortunately, this means all the websub pages would be duplicated as 
>> many times as it was copied, and any editing of the copied websub must 
>> usually be applied to all the copies. This makes maintenance difficult, 
>> and wastes disk space. Multiple copies would occur even if the websub 
>> is fired from different pages on the SAME machine. I realise you may 
>> have trouble grasping some of this. I have been thinking about it for 
>> about 10 years.

>I don't understand how you can't do without making file instances of the
>HTML neurons,

Don't really understand this question. At present each HTML neuron 
(webneuron) is a separate file. This is not good, I know. Ideally, 
there would be no files, only "contents" webneurons, which 
I mentioned elsewhere recently.

>how could you handle multi user requests

By making temporary copies of TEMP websubs and TEMPHOST webfuns 
(please see above). 
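
A sketch of that answer, with one library original and a throwaway copy per 
firing (the names are invented for the example):

import copy

class Websub:
    def __init__(self, name):
        self.name = name
        self.state = {}                   # whatever it stores while processing

    def process(self, caller, data):
        self.state["caller"] = caller     # would be corrupted if two callers shared it
        return self.name + " handled " + data + " for " + caller

master = Websub("mailer")                 # the single library original

def fire(caller, data):
    # each firing gets its own throwaway copy of the original, so
    # simultaneous callers never tread on each other's stored data
    temp = copy.deepcopy(master)
    result = temp.process(caller, data)
    del temp                              # the copy disappears after use
    return result

print(fire("page1.html", "mail A"))
print(fire("page2.html", "mail B"))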

>how can page 3 (in your gif diagram) remember it has already been or not yet 
>been called, surely not by storing this kind of information in the code of the 
>neuron itself.

It is normal, even for example in Microsoft Excel macros, to store such 
information on the end of the macro, or in persistent "local" 
variables. When you call the macro again it can then know it was called 
before and by whom. We could certainly store such data on the end of the 
HTML or even in the middle of it. 

However, I think the concept of webstores is better. Store things in separate 
storage pages linked to the target. You never know how big, how complex or 
of what type the data could be, but whatever it is, it will usually go 
in a whole page quite nicely. Let the interpreter read and write data. 
In addition, with the concept of webaxons, data can now be passed to the 
target without firing it.
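
A sketch of the webstore/webaxon idea: state lives in a separate storage page 
linked to the target, and a webaxon can carry data to the target without 
firing it. All the names below are stand-ins of my own:

class Webstore:
    """A separate storage page linked to the target webneuron."""
    def __init__(self):
        self.page = {}                    # the interpreter reads and writes this

class Webaxon:
    """Carries data to a target webneuron; setting data does not fire it."""
    def __init__(self, target):
        self.target = target
        self.data = None

    def set(self, value):
        self.data = value                 # data delivered, target left unfired

class Webneuron:
    def __init__(self, url):
        self.url = url
        self.webstore = Webstore()        # long-term data lives here, not in the script

target = Webneuron("address.html")
axon = Webaxon(target)
axon.set("12 longway road, Brighton")     # passed to the target without firing it
target.webstore.page["last result"] = axon.data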

It will also be useful in each webaxon to have a flag of some description 
to say whether the data has changed since the last time the target 
webneuron was fired. When you viewed the webneuron, perhaps during 
debugging, you could see a marker next to all the inputs (and outputs) 
that had changed in this way.

In this way we could then add some neat extensions. What about:-

IF <ALL> 
   <FIRE href="...">
   <RESET>
ENDIF

IF <ANY> 
   <FIRE href="...">
   <RESET>
ENDIF

Where <ALL> is only true when all inputs have been changed and 
<ANY> when one or more has changed. <RESET> would clear 
all the markers.
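
A sketch of how those markers might behave (the bookkeeping here is my guess 
at the intent, nothing more):

class Webneuron:
    def __init__(self, n_inputs):
        self.changed = [False] * n_inputs   # one marker per input webaxon

    def input_changed(self, i):
        self.changed[i] = True              # set when the webaxon's data changes

    def check(self, fire):
        if all(self.changed):               # IF <ALL> ... <FIRE> ... <RESET>
            fire("all inputs changed")
            self.reset()
        if any(self.changed):               # IF <ANY> ... <FIRE> ... <RESET>
            fire("at least one input changed")
            self.reset()

    def reset(self):                        # <RESET> clears all the markers
        self.changed = [False] * len(self.changed)

n = Webneuron(3)
n.input_changed(0)
n.check(print)                              # fires the <ANY> case
n.input_changed(0); n.input_changed(1); n.input_changed(2)
n.check(print)                              # fires the <ALL> case, then resets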

Speaking of "ALL", what about <CASCADE> to fire all outputs 
in one go.

We should also allow <ABORT href="..."> to remove a 
webneuron from any stack it might be waiting in.
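
And a matching sketch of <CASCADE> and <ABORT>, again just my guess at the 
behaviour:

from collections import deque

class Webneuron:
    def __init__(self, url, outputs=()):
        self.url = url
        self.outputs = list(outputs)

pending = deque()                         # webneurons waiting to be fired

def cascade(neuron):
    # <CASCADE>: queue every output of the webneuron for firing in one go
    pending.extend(neuron.outputs)

def abort(url):
    # <ABORT href="...">: pull a waiting webneuron back out of the stack
    global pending
    pending = deque(n for n in pending if n.url != url)

a, b = Webneuron("a.html"), Webneuron("b.html")
cascade(Webneuron("hub.html", [a, b]))
abort("b.html")
print([n.url for n in pending])           # only a.html is left waiting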

Incidentally, it would also be a nice touch in ordinary web browsing 
if when going from one page to another, that any links back to the 
caller page were highlighted in the target. A lot of people don't know 
about pressing the right mouse key, and it's harder to locate the usual 
back arrow key. Anyway, it feels better to use links actually on the page.

If all the features of Netscape were not bolted in tight, then we 
could put through such amendments in an instant. I hope weblets do not 
become edit locked like this.

>> OPTION B
>> There would be only one master copy of the websub, and all pages wanting 
>> to use it would fire it in the normal way. The websub should also have a 
>> complete list of every firing page. This is good for tracing and indexing 
>> things. You may not agree on this. It could be a very big list for popular 
>> websubs.
>
>You're right I don't agree, it makes the system to complex and closed: imagine 
>this practiced for the web. 

I am only trying to copy the way the brain works, which is a principle I 
think we have agreed on. Each brain neuron is physically connected to all 
its input brain neurons, so a list can be made and a map can be drawn. 
One of the problems with computing is that there is too much chaos: 
you don't know what is calling what. Look at a road map; at each junction 
("neuron") we can make a list of all connecting roads. This is 
very orderly, and it works.

>Further more the number of files increases by a 
>factor 2, slowing the system...

I would say it was better to get the right system first, and worry about 
speed later.

>> There is a problem here, because if the websub is fired again from elsewhere 
>> while still processing a previous firing, then what do you do? 
>
>I don't see the problem

The problem is the same as your question above about how to handle multi-users.
I think the answer is to make temporary copies.

... next installment coming your way soon.