To: Frank Schuurmans
Hi Frank,
>I think Perl is the way to go in developing a web-
>neuron system, for the following reasons:
>- Interpreted language.
Yes, I hate compilers.
>- Most Internet hosts are un*x. If we want the system to be used (tested) on
> the Net, it should
> run under un*x and system administrators should be able to check the source
> (security issues).
Sounds very reasonable.
>- Perl is very good for handling strings, which we need if we want the script
> to extract data from HTML docs.
>One drawback might be the speed of Perl, but I think this is no problem as
>network connections are the speed-limiting factor.
About the question of speed:-
I can see a big demand for fast local weblets all on one machine, especially a
client machine. Weblets would replace Java and "applets".
The webneurons can be all on the same machine (local), distributed across many
machines or a mixture of both. As you probably know, while running say Netscape on
a client, it is possible to look at local pages (maybe self designed) on your own
computer, as well as pages on the main World Wide Web. To view a local page you
just use the filename e.g. file:///fred.html as the URL rather than something like
http://www.abc.com/fred.html. I think we want to be able to "run" local weblets.
Consequently I think the ability to interpret weblets must be built into the client
browser, and also into the server. If we ignore the browser aspect then Java will
continue when it should be replaced by webneuron programming.
On the server side, if, say, we wanted to rewrite Lycos or create our own Internet
search engine in weblet form, then we would want it as fast as possible and probably
running on one machine. Could Perl do it?
The conclusion I seem to be coming to is that we have to write a combined
interpreter/browser that will operate at both client and server levels, and be capable of
running weblets as well as presenting standard HTML to the client screen. A new
Netscape!!
If we want weblets to control, say, animated graphical output on the client's VDU
screen, then for speed reasons it would be important to download the weblets to the
client, where they would be interpreted by our interpreter on the client machine and
the graphics presented directly. It would be far too slow if the weblet was server-based,
since you would have to keep going backwards and forwards to the server every time
you wanted to move a client graphical object.
However, just to prove the principle you may be quite correct to suggest Perl. Can
we install Perl on client machines and use a Perl script for browsing? Are there
any Perl browsers available now?
>It is not clear to me how you see the processing
>capabilities of one page.
Very simple use of IF ELSE, operators like + - > etc., and data storage within (or on
the end of) the HTML. This is preliminary and based on the address book example I
worked through. Everything is early days at the moment.
>In programming, the following
>concepts (with -> my understanding of how each is implemented
>in the webneurons):
>- Data (types) -> in HTML
Yes. Data input via the FIRE command will sometimes need to be stored (embedded)
in the HTML or on the end for later use. Sometimes it will be operated on
immediately via IF ELSE. All this is very preliminary. At the moment I see data
mainly as strings, but integers, real numbers, even JPEGs, audio etc. could be useful -
who knows till we have a system we can experiment on.
It may be advantageous to have an EVAL-type statement that would allow us to create
a line of HTML using strings and embed it into or on the end of the page.
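Just to make the storage side concrete, here is a rough Perl sketch of one way named data could live on the end of a page - the DATA comment format and the file name are only invented for illustration, not a proposal:-

#!/usr/bin/perl -w
# Sketch only: keep named data "on the end" of a webneuron page,
# using HTML comments so a browser ignores it.  The DATA comment
# format and the file name are assumptions, not an agreed format.
use strict;

# Append (or overwrite) a named value at the end of the page.
sub store_data {
    my ($file, $name, $value) = @_;
    my @lines = ();
    if (open my $in, '<', $file) {
        @lines = grep { !/<!--\s*DATA\s+name="\Q$name\E"/ } <$in>;
        close $in;
    }
    open my $out, '>', $file or die "cannot write $file: $!";
    print $out @lines;
    print $out qq{<!-- DATA name="$name" value="$value" -->\n};
    close $out;
}

# Read a named value back out of the page.
sub fetch_data {
    my ($file, $name) = @_;
    open my $in, '<', $file or return undef;
    while (<$in>) {
        return $1 if /<!--\s*DATA\s+name="\Q$name\E"\s+value="([^"]*)"/;
    }
    return undef;
}

# Usage: persist Street and Town between firings.
store_data('fred.html', 'Street', '12 High Street');
store_data('fred.html', 'Town',   'Cambridge');
print fetch_data('fred.html', 'Town'), "\n";   # prints "Cambridge"

Because the data sits inside HTML comments, a browser would still display the page normally.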
>- Operators -> in HTML
You mean + - > etc? They must go in the page. I can't see anywhere else for them.
But hopefully as few as possible. I prefer the concept of lots of small pages with
extremely simple coding. I believe that is how the brain does it. Don't hold me to this
one.
>- Statements -> 'if' in HTML other statements in the
> structure (the links)
Yes. We could allow WHILE in the HTML but I would say that is the wrong way to
go. I am not sure about everything though. It is early days.
>- Subroutines, functions etc -> in links
This is very tricky.
There is no equivalent of the subroutine in the brain, unless you consider action within
the neuron as such. A network of brain neurons is hardwired.
However, there are at least two ways a weblet subroutine/function (websub/webfun)
could possibly be useful.
WAY 1 - If you really needed to do some function within the webneuron,
you could fire a weblet to do it and it could return result(s) WITHIN the webneuron
page. Call this type of weblet a webfun:-
|RESULT>
(I think "PARAMS" is better than "INPUTS")
Street and Town would be persistent, i.e. stored on the end of the page or something,
and never lost until overwritten.
The Interpreter would have to wait until the webfun had returned the result(s). This
could take some time if the webfun URL was not on the same machine, and the effect
on any local multi-tasking needs careful consideration. Things would be even more
interesting if the webfun itself was distributed across several machines. This latter
problem diminishes with time, as bandwidth and speed on the Internet increase.
However, for large distances between machines, the speed of light remains a barrier.
Even more intricate would be if webneurons in the webfun themselves also called
webfuns (nesting). No one said it was going to be easy!
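As a very rough Perl sketch of WAY 1 (the /fire/ URL, the lookup.html webfun and the way the params travel are all just invented for illustration, and I am assuming the LWP library for the fetching), the blocking wait might look like this:-

#!/usr/bin/perl -w
# Sketch only: fire a webfun at some URL, wait for its result, and
# keep that result for the calling page.  The /fire/ URL form, the
# lookup.html webfun and the query-string params are assumptions.
use strict;
use LWP::Simple qw(get);
use URI::Escape qw(uri_escape);

my $street = '12 High Street';   # persistent data held in this page
my $town   = 'Cambridge';

# Build the call - parameters travel in the query string for now.
my $url = 'http://www.abc.com/fire/lookup.html'
        . '?Street=' . uri_escape($street)
        . '&Town='   . uri_escape($town);

# The interpreter blocks here until the webfun returns (or fails).
my $result = get($url);
die "webfun did not answer\n" unless defined $result;

# Store the RESULT on the end of the calling page for later use
# (see the store_data sketch earlier).
print "RESULT: $result\n";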
WAY 2 -
Say you have a standard weblet that is used time and time again in many
circumstances. Call this a websub. It may return something to the caller by firing
param(s) back to it, or it may return nothing. But it is not internal like the webfun.
There are at least two options for handling websubs:-
OPTION A
Copy the websub from, say, a central pool of such standard websubs, and "slot it in".
Unfortunately, this means all the websub pages would be duplicated as many times as
it was copied, and any editing of the copied websub must usually be applied to all the
copies. This makes maintenance difficult, and wastes disk space. Multiple copies
would occur even if the websub is fired from different pages on the SAME machine. I
realise you may have trouble grasping some of this. I have been thinking about it for
about 10 years.
OPTION B
There would be only one master copy of the websub, and all pages wanting to use it
would fire it in the normal way. The websub should also have a complete list of every
firing page. This is good for tracing and indexing things. You may not agree on this. It
could be a very big list for popular websubs.
There is a problem here, because if the websub is fired again from elsewhere while
still processing a previous firing, then what do you do?
1) You can make the other firer(s) wait till the first is finished. But what if there were
hundreds of waiting firers from all over the world?
2) You could create temporary copies of the master, and use those to serve any
queues. Thus you would have a websub server! But what if the websub fired a
websub? Possibly a bit messy?
OPTION B 2) seems like the way to go, but it is quite involved. For the moment we could
use OPTION A or none at all - don't allow websubs, just hardwire everything. At
least we could get started more easily.
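As a rough Perl sketch of OPTION B 2) - one master websub, with a throwaway copy made only when the master is already busy serving another firer. Using a file lock as the "busy" test is just my assumption:-

#!/usr/bin/perl -w
# Sketch only: OPTION B 2) -- one master copy of the websub, with
# temporary copies made whenever the master is already busy.
# File locking as the "busy" test is an assumption about how the
# interpreter might keep track.
use strict;
use Fcntl qw(:flock);
use File::Copy qw(copy);
use File::Temp qw(tempfile);

sub run_websub {
    my ($master) = @_;
    open my $fh, '<', $master or die "no such websub: $master\n";
    if (flock($fh, LOCK_EX | LOCK_NB)) {
        interpret($master);               # master was free: use it directly
        flock($fh, LOCK_UN);
    } else {
        # Master busy: serve this firer from a temporary copy instead.
        my ($tmp_fh, $tmp_name) = tempfile(SUFFIX => '.html');
        copy($master, $tmp_name) or die "copy failed: $!";
        interpret($tmp_name);
        unlink $tmp_name;
    }
    close $fh;
}

sub interpret { print "interpreting $_[0]\n" }   # stand-in for the real interpreter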
>Maybe a webneuron could be called a 'web unit of processing' (WebUp or
>wup).
Do you think a webneuron can do more than a real brain neuron then? I am interested
in this. I have contact with someone who knows about real brain neurons and we can
ask him any questions we like.
Shall we stick with webneuron for the moment? But do you like "weblet"?
>> Triggering of other pages (this can give multi-tasking capabilities)
>> Data to be passed between pages
>> Data to be stored in pages, and named
>> Simple logic flow (IF, ELSE) acting on stored data names
>> String handling
>
>Perl is good in string handling!
>
>> Links to be made/broken
>> Pages to be created/deleted
>
>Should the weblet be allowed to do those things or only the programmer?
Automatic page creation is common in search engines, and a weblet-based search
engine would definitely be on my list.
The process of learning may involve neuron creation/deletion, especially in the foetus, I
am told. I think automatic page/link creation/deletion is a definite
bonus. Besides, why restrict ourselves?
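For illustration only, automatic page creation could be as simple as the following Perl sketch - the page layout and the CALLER comments are invented, not a proposal:-

#!/usr/bin/perl -w
# Sketch only: letting a weblet create a new webneuron page itself,
# e.g. a search engine writing one page per indexed site.  The page
# layout and the CALLER comments are invented purely for illustration.
use strict;

sub create_webneuron {
    my ($file, $title, @allowed_callers) = @_;
    die "$file already exists\n" if -e $file;
    open my $out, '>', $file or die "cannot create $file: $!";
    print $out "<HTML><HEAD><TITLE>$title</TITLE></HEAD><BODY>\n";
    print $out "<H1>$title</H1>\n";
    print $out "</BODY></HTML>\n";
    # Callers allowed to fire this page, kept on the end as comments.
    print $out qq{<!-- CALLER url="$_" -->\n} for @allowed_callers;
    close $out;
}

sub delete_webneuron { unlink $_[0] or warn "could not delete $_[0]: $!" }

create_webneuron('result1.html', 'Search result 1',
                 'http://www.abc.com/fred.html');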
>> (things I didn't think of yet)
>
>What about returning some output to the caller
Output can be returned to the caller by firing params back to it, and so is covered by
"Triggering of other pages" above. The caller's URL can be held by a "result"
webneuron which can then fire params back to it (please see the .gif I sent).
>this is in my view the trickiest part of the webneurons since several
>webneurons could return output to one call and the system doesn't consist of
>one to one links
Yes, but in this case the caller could have some code that told it to wait until all the
outputs had been returned. We have to try a lot of real examples to see how it would
work in practice. Certainly it is a strange and unusual way to program, but that may be
a good thing, since the software industry is in a mess and needs something radical.
>it is not possible to let the webbrowser fire all the neurons
I presume you are talking about a neuron that itself fires many neurons. The situation
is complex.
For example, say neuron A fires neurons B, C, and D; neuron B fires B1 and B2; C fires
C1 and C2; and D fires D1 and D2. We have a problem with all these firings! How can the
webbrowser remember to do everything, and in what order should it do them even if it
could remember?
There is no problem if B, C and D are on different machines because the webbrowser
can just fire them off to the various machines and forget about them.
However, if B, C, D are on the same machine, that machine has to handle them all
itself. This is a problem of multi-tasking.
To remember things and to interpret them in an even fashion, I suggest our interpreter
uses the following stack system:-
A   B   C   D   B1  B2  C1  C2  D1  D2
    C   D   B1  B2  C1  C2  D1  D2
    D   B1  B2  C1  C2  D1  D2
        B2  C1  C2  D1  D2
            C2  D1  D2
                D2
where the webneuron at the top of a column is the next one to be interpreted out of that
column, and actual interpretation follows the order:-
A B C D B1 B2 C1 C2 D1 D2 i.e. the top row
So firing a webneuron does not mean it will be interpreted immediately, since there
may be others ahead of it in the queue.
A is interpreted first and fires B, C, & D. A is then removed from the stack. B goes on
top of column 2 because it was "fired" before C and D in A's HTML. B is then
interpreted and fires B1 and B2 which go on the bottom of the stack as illustrated in
column 3 - etc, etc.
This stack system can be implemented using a stack pointer and a bit of logic.
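In Perl the logic is only a few lines, because the columns above behave as a first-in, first-out queue - here is a sketch, with the %fires table just standing in for reading the FIRE commands out of each page's HTML:-

#!/usr/bin/perl -w
# Sketch only: the queueing scheme in the table above.  Each newly
# fired webneuron joins the end of the queue, and the interpreter
# always takes the one at the front.  %fires stands in for reading
# the FIRE commands out of each page's HTML.
use strict;

my %fires = (
    A => [qw(B C D)],
    B => [qw(B1 B2)],
    C => [qw(C1 C2)],
    D => [qw(D1 D2)],
);

my @queue = ('A');                 # A is fired from outside
while (@queue) {
    my $neuron = shift @queue;     # take the front of the queue
    print "interpreting $neuron\n";
    # Anything this webneuron fires goes on the end of the queue.
    push @queue, @{ $fires{$neuron} || [] };
}
# Interprets A B C D B1 B2 C1 C2 D1 D2 - i.e. the top row of the table.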
>so neurons should send the IP-number of the original caller to each neuron
>they fire or the first neuron should return the output. (I think only the
>last solution is realistic: a user should call an 'output neuron')
I presume you mean URL when you say IP-number. IP number is the number of the
computer.
I don't know why you say only the first neuron should return the output. This is not
the way the brain works. It would be extremely restrictive and would mean that loops
could not be done using the links, so we would have to keep all the old bad habits
(WHILE statements etc). I believe we should have the freedom to draw webneuron
maps in any fashion we choose, as in the example weblet .gif.
If you look at the address book example you will see that the caller URL does not
have to be passed from webneuron to webneuron. The last webneuron in the line is
hardwired to fire the original caller. You could do it your way too, but what you are
describing is sort of like the webfun concept I mentioned above, which may be
desirable, but should not stop everything else. I think weblets are best understood by
maps.
>>You also need an interpreter to go through the code in each Webon.
>Another plus of using Perl: we already have the interpreter, we just have to
>'parse' the HTML files.
But how difficult is it to write a complete client browser in Perl? I have heard you
can't use Perl with Windows 3.1.
>In the above example 'fire' is the processing script executed at the server
>side (not by Netscape). It reads foo.html which contains foo2.html. The
>script then calls /fire/foo2.html. (So the script contains some 'browser'
>functionality; this is not as difficult as it sounds - I've once written a
>script which called several Net search engines (like Meta crawler and
>Savysearch do).) If Netscape were used for processing, the system would
>not be distributed: all webneurons would have to be sent to the caller (browser)
>and processed by the browser, making the system very slow and complex. A
>plugin or Java script would be necessary. You would put the security at the
>caller side and not on the source provider side (which is the big problem
>of Java, although Sun claims it to be secure).
As above, I think we have got to have an interpreter on the client side as well as the
server side; everything needs to be integrated.
Regarding security: If each webneuron has a list of all the caller webneurons that are
allowed to fire it, as I have suggested, then not only would this be good for generating
a map for editing/debugging but the list could also possibly be used for security, i.e.
any calling webneurons not in the list would be thrown out.
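A rough Perl sketch of that check, assuming the permitted callers are kept in CALLER comments on the end of the fired page and the firer's URL arrives as a 'Caller' parameter (both assumptions):-

#!/usr/bin/perl -w
# Sketch only: use the page's own list of permitted callers for
# security.  A fire request carries the caller's URL; if that URL is
# not in the fired page's list, the firing is refused.  The CALLER
# comment format and the 'Caller' parameter are assumptions.
use strict;
use CGI qw(param);

my $page   = 'fred.html';                 # the webneuron being fired
my $caller = param('Caller') || '';       # URL of the firing webneuron

open my $in, '<', $page or die "cannot read $page: $!";
my %allowed;
while (<$in>) {
    $allowed{$1} = 1 if /<!--\s*CALLER\s+url="([^"]+)"\s*-->/;
}
close $in;

unless ($allowed{$caller}) {
    print "Status: 403 Forbidden\n\n";    # throw out unknown callers
    exit;
}
print "Content-type: text/html\n\n";
# ...carry on and interpret $page as normal...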
>If the processing is done on the server side only, data has to be sent over the
>net if a weblet on another server is called (by the browser or by a weblet!)
>which makes the development of real code 'reuse' and 'superprograms' (a program
>running on computers all over the world) possible.
What is "real code reuse"? I am certain that "superprograms" are possible with
weblets.
Again, I think we have got to allow weblets to be downloaded to clients and
interpreted on the client machine as well. But maybe not immediately.
>>What type of CGI script do you use.. PERL, VB? I can't program anything yet
>>because I'm not set up for it.
>Perl, I will check how it runs under MS-win, since my apprentices don't
>run Linux either.
Yes, maybe I should switch to Linux.
>If we go ahead, I propose we develop this as public domain software
>(under the Perl Artistic License or GNU license). Maybe it is good if
>more people get involved in this project.
I read this after my email to you about stock options. My real position is that I want to
put forward the webneuron concept as a priority. I am not that interested in making
anything out of it. I have a business renting out property so do not need money as
such. However, a lot of my time is taken up in this business, and I would rather spend
that time on computing. So if there was a way to make a living out of computing I
would.
So I am quite happy to do it all for free if that will improve progress in the world.
The stock option could be a good idea though, since if there were a reward for people we
might get a lot faster progress.
Please let me have your thoughts on this.
>We could set up some pages
>on the web, with the ideas, scripts, docs, examples etc.
I have been working on this. My first thought was to gather together all the threads
from the newsgroups and mailing lists I originally posted to, plus our communication,
and display them as web pages. I have been designing a lot of web pages for about a
year using a 3-column table format and have a standard template I use. I use these
pages as a local "Intranet" and a handy bookmark system for interesting WWW pages.
The thought has occurred to me that extending this table template could give a basic
webneuron.
>I think we need to:
>- solve the output question
Please say more about this.
>- specify a 'template' webneuron (how and which data is stored in
> the HTML file/table).
Please let me have your thoughts first.
I must say I am amazed by your responses. I am beginning to believe that we could
really get somewhere.
Look forward to your reply,