To: Frank Schuurmans
Hi Frank,
I had to split the Email into two. I was getting editing problems (couldn't
add anything). I presume the file was too large.
Sorry to take so long to reply but I have been busy on the Website, which
is pretty well up to date now. It took ages to create the pages and links, a
lot of it manually. What we really need is a weblet HTML publisher/editor
as priority number 1!!
I would like to mention a bit of terminology. The term "webneuron" is
restricted to a software neuron at the Web level. The concept of software
neurons can be applied at any level of computing. Let's call a generalised
software neuron at any level a "hub" (dictionary - central point of interest).
Also, the term "weblet" is similarly restrictive. Let's use "brainlet" for a
network of software neurons at any level of computing. I am keeping well
clear of neural network terms (like "node") since I think brainlets are
superior. Some people are wrongly assuming that we just have a neural
net.
Hub and brainlet are both Paul's idea, and seem to us to have popular
appeal.
I should also say that there is a whole lot of theory about hubs that I have
not yet mentioned for fear of confusing everyone (including me). Before
the webneuron idea I looked into developing hubs at assembler level.
> I have spent quite some time thinking about webneurons. My interest in
> webneurons is a result of my interest in biology (my study) and my
> interest in computing/the Net (my business).
Do you earn a living from the Internet? I thought this was very difficult.
How do you earn money if you don't mind me asking?
>I haven't thought a lot about webneurons before, but more about a
>biological way of using/programming computers (from my conviction that
>biological systems are superior: they have been tested for millions of
>years).
I totally agree. I wonder why this principle wasn't used from the start of
computing.
>The image you sent showed something new to me: Page 3, which has to be
>fired by two other pages. This implies this page has some memory of
>whether it has been fired, by which webneuron, and by which process
>(multi-tasking!). My first understanding of webneurons was that they
>are quite ignorant of each other...
Yes, a page must be able to have a memory. If page 2 fires page 3 first,
then page 3 must know that, when page 4 fires it next. We may need an
extension something like:-
fred<|MEM>
This would be a readable/writeable storage location in the HTML page
(called status). The tag could go anywhere in the page. In the example
above status = "fred". But it could also be say:-
<|MEM>
In which case status would be a graphic item and we could then do
operations on graphics such as adding one picture to another, or changing
brightness/colour etc under program control. The sky's the limit.
We would need the ability to read and write to it while the page was being
interpreted.
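As a sketch of how an interpreter might read and write such a tag (the tag syntax comes from the example above, but the helper names and the regular-expression approach are my assumptions, not a spec):

```python
import re

# Hypothetical <|MEM> handling: the status value sits just before the
# tag in the page's HTML, as in "fred<|MEM>".  The syntax is assumed
# from the example above, not a defined standard.
MEM_RE = re.compile(r'([^<>]*)<\|MEM>')

def read_status(page_html):
    """Return the stored status, or None if the page has no <|MEM> tag."""
    m = MEM_RE.search(page_html)
    return m.group(1) if m else None

def write_status(page_html, new_status):
    """Overwrite the stored status, leaving the tag in place."""
    return MEM_RE.sub(new_status + '<|MEM>', page_html, count=1)

page = '<html><body>fred<|MEM></body></html>'
assert read_status(page) == 'fred'
page = write_status(page, 'just fired')
assert read_status(page) == 'just fired'
```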
If page 3 contains the code:-
IF caller = page2
IF status = "page4 called"
FIRE page10
status = "just fired"
ELSE
status = "page2 called"
ENDIF
ENDIF
IF caller = page4
IF status = "page2 called"
FIRE page10
status = "just fired"
ELSE
status = "page4 called"
ENDIF
ENDIF
<|PROG>
Then page10 would only fire after both callers had called.
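The same coincidence logic as a runnable sketch (purely illustrative: FIRE is modelled as a list append and status as a plain dict, standing in for storage in the page itself):

```python
# Page 3 fires page 10 only after BOTH page 2 and page 4 have called it.
status = {'page3': ''}   # the page's <|MEM>-style memory
fired = []               # record of FIRE calls, for illustration

def fire(target):
    fired.append(target)

def page3(caller):
    if caller == 'page2':
        if status['page3'] == 'page4 called':
            fire('page10')
            status['page3'] = 'just fired'
        else:
            status['page3'] = 'page2 called'
    elif caller == 'page4':
        if status['page3'] == 'page2 called':
            fire('page10')
            status['page3'] = 'just fired'
        else:
            status['page3'] = 'page4 called'

page3('page2')          # first caller: remembered, nothing fires
assert fired == []
page3('page4')          # second caller: page 10 fires
assert fired == ['page10']
```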
Status has to be saved with the HTML of the page, even if it goes to hard
disc. There is no other way I can think of. Thus we have to permanently
alter the
HTML during the process of interpretation. If we don't allow this then we
are going nowhere. The brain obviously has this feature. Each neuron can
keep a memory of how much it has been fired recently and use that to
adjust its normal response to further firing.
What I think would happen is that we would develop a family of standard
type webneurons with code already embedded. We could then pick from
this family the type that suited each purpose. So we wouldn't have to keep
writing the code again and again. It would be more like building by
plugging in components.
>Your email dated June 6:
>
>> About the question of speed:-
>>
>> I can see a big demand for fast local weblets all on the one machine,
>> and especially a client machine. Weblets would replace JAVA and
>> "applets".
>
>I don't think webneurons will ever be able to beat Java on raw
>processing speed: interpreted code is always slower compared to native
>code. Since Java is interpreted code very close to native code it is
>fast.
Maybe we can use the JAVA code for our own purposes inside the page
then. But can it permanently store strings etc as for the <|MEM> tag
above? And could it implement a command similar to FIRE? Please see a
reference to "TEX" on the Website at:-
http://brain.eu.org/585.htm (it's at the bottom)
When I said "fast local weblets" I only meant that if the weblet was
distributed over several machines then processing could be very slow
indeed, so local and downloadable weblets could be important.
>Secondly if webneurons only exist in the form of HTML documents, all
>documents have to be accessed/read, and a lot of unnecessary info
>(comments, HTML markup, etc) is included in those files, slowing the
>system even more.
I see the program sections going at the top of pages and being executed
before any other HTML markup is encountered. In addition I see lots of
very short pages, so your drawback should be minimised. If our system is
successful then there will be plenty of people finding ways to speed it up. I
wouldn't worry too much about speed yet. The real increase in speed will
come when they start building weblet based massively parallel
microprocessors!
It is interesting to speculate whether HTML could go down to that low
level. I think it might. Microsoft are already (from April 96) building Java
into Windows. This is a big come-down for them. So the trend is to
replace conventional operating systems with Internet stuff. If HTML wins
over Java then presumably HTML can be taken down into Windows,
MacOS, UNIX etc too. We could end up with an HTML OS. But only
with weblets! Whatever the future holds I think it will be based on hubs
and brainlets, whether HTML or not.
>This might mean we have to leave the idea of processing the pages
>themselves but use them as an interface to the system, storing the
>table data in one file or a relational database (how is your SQL
>knowledge :) )
I have programmed a relational database in dbase4, and converted my
property business using Excel (non-relational). I know little of SQL.
HTML has tables, couldn't we use that?
It doesn't really matter how we do it, as long as we do it, and it has the
properties we want.
>> I think we want to be able to "run" local weblets.
>The question of speed and the ability to run weblets locally may well
>mean we've got to use Java.
I think with HTML it could still be fast enough for many purposes. The
main problem is that each page is a separate file. We may need to pack
many small pages into one file. This is how Netscape handles Email and
Free Agent stores news postings.
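A sketch of that packing idea, assuming a simple container file keyed by page name (JSON is my choice here just to show the principle, not a proposed format):

```python
import json
import os
import tempfile

# Pack many small webneuron pages into a single file, the way a mail
# client stores many messages in one mailbox file.
def pack(pages, path):
    """pages: dict mapping page name -> HTML text."""
    with open(path, 'w') as f:
        json.dump(pages, f)

def unpack(path):
    """Return the dict of pages stored in a packed file."""
    with open(path) as f:
        return json.load(f)

pages = {'123.htm': '<html>page 123</html>',
         '124.htm': '<html>page 124</html>'}
path = os.path.join(tempfile.mkdtemp(), 'weblet.pack')
pack(pages, path)
assert unpack(path)['124.htm'] == '<html>page 124</html>'
```

One file open then serves many page reads, which is the whole point when each page is only a few lines long.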
I have no objection to trying to use Java for a weblet operating system (if it
can be done). I only object if Java is used instead of weblet applications. I
think you will find that once you have proved it with Java, then a lot of
what Java is itself could be replaced by weblets. We should only need a
small non-weblet operating system kernel, for example a stacking system
to handle the multi-threading, and a bit of unique code for interfacing to
the hardware of each different computer, to handle the screen/hard disc
interface etc.
>> Consequently I think the ability to interpret weblets must be built in to
>> the client browser, and also into the server. If we ignore the browser
>> aspect then Java will continue, when it should be replaced by
>> webneuron programming.
>I'm not sure I understand your problem with Java.
As above. No problem as a trial operating system.
>> On the server side, if say we wanted to rewrite Lycos or create our own
>> Internet search engine in weblet form, then we would want it as fast as
>> possible and probably running on one machine. Could PERL do it?
>I think Perl is the way to go in developing a webneuron server-side-only
>system.
Can we also do this with Java?
>> The conclusion I seem to be coming to is that we have to write a
>> combined interpreter/browser that will operate at both client and
>> server levels, and be capable of running weblets as well as presenting
>> standard HTML to the client screen. A new Netscape!!
>Since HTML is still rapidly changing it might be a good idea to let
>Netscape and Microsoft fight out their battle of the browsers while
>developing and optimizing the system. If the system is really good it
>would not be difficult to write a browser using webneurons.
I need a decision here on which way to go. Do I get Linux and use Perl, or
do we go for Java?
>> If we want weblets to control say animated graphical outputs to the
>> client's VDU screen then for speed reasons it would be important to
>> download the weblets to the client, where they would be interpreted by
>> our interpreter.
>As I understand it a weblet is a collection of HTML pages. It wouldn't
>surprise me if running the program remotely (from the client's view) and
>sending some control and image data to the browser is faster compared to
>downloading a bunch of HTML pages
But the executable HTML pages can be very small. In this case I am pretty
sure that for most screen applications, operating in real time using a
modem line would be impractical. I am proposing that downloadable
weblets replace most instances of downloadable Java applets (unless speed
is super critical).
>(in this case spreading a program over several files reduces the speed
>even more since the network is used)
Not if weblet pages are grouped in one file.
If we extrapolate this principle we will end up at the assembler level with
webneurons just as structures in RAM and no files.
I don't believe in files. The only building blocks we need are structures
such as webneurons, webaxons and webstores. Each weblet could contain
a "contents" webneuron with a list of every other structure in the weblet.
Each application could have a contents webneuron with a list of each weblet
contents page. Plus the operating system could have a list of each
application contents page. To transfer webneurons/weblets/applications to
other machines, the only thing you need is the contents webneurons, not a
file manager. How the webneurons should ideally be stored on hard disc is
an OS problem I have not thought about much yet.
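The contents idea can be sketched like this (all the structure names are illustrative, and a dict stands in for however the structures actually live in RAM):

```python
# Each weblet carries one "contents" webneuron listing every structure
# it contains, so a whole weblet can be transferred by walking that
# list -- no file manager needed.
weblet = {
    'contents': ['adder', 'axon1', 'store1'],   # the contents webneuron
    'adder':  {'type': 'webneuron'},
    'axon1':  {'type': 'webaxon'},
    'store1': {'type': 'webstore'},
}

def transfer(weblet):
    """Copy a weblet to another machine using only its contents list."""
    copy = {'contents': list(weblet['contents'])}
    for name in weblet['contents']:
        copy[name] = dict(weblet[name])
    return copy

remote = transfer(weblet)
assert remote['store1'] == {'type': 'webstore'}
```

The same pattern nests upward: an application's contents webneuron lists its weblets, and the OS's contents webneuron lists its applications.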
>But only a working system can tell us the fastest solution.
Absolutely.
>I don't know of any perl browsers; there is a perl http server, but basic
>(text-only) handling of webneurons would not be too difficult using a perl
>script and hacking the server.
Surely we need to be able to do everything Netscape does + we need to
handle webneurons.
>Maybe we should clear up some of the neuron confusion in my head :) I
>know of the following neurons:
>
> A) The biological ones
> B) The simulated ones from AI.
> C) Webneurons
>
>If a C has operators inside of them I suspect you want to model A. The
>problem with neurons (A and B) is that you don't program them but try to
>teach them something.
We should be able to do both with webneurons. You realise that all of this
is very difficult to predict since I have never had a brainlet programming
system to experiment on. It is all imagination.
>If this is your idea of webneurons I don't understand the
>use of operators in webneurons and drawing maps of them.
No, webneurons are more than a neural network. They are a fully
'programmable' system (this must include operators). You could also set
them up to learn like a neural-net.
>How many webneurons would you expect we need to program a simple
>weblet?
A weblet could technically be any size, but I see them as similar to
subroutines in conventional languages e.g. small calculations, a bit of
graphics/string handling, etc. I suppose a typical weblet could have 10
webneurons or so. But then we will stitch weblets together to perform
more complex applications. I don't quite know how all that will turn out
because I have never had anything to experiment on.
>Do they need names (or numbers)? How would you keep track of them all?
If you look at how the Website is arranged you can see the way I think it
will work. You keep track of them mainly by the structure/map of the
links. It is remarkably easy to remember your way around providing each
target also has a link back to the caller, which is true on our Website but
not on the WWW. This is a big problem with the WWW - very few
explicit links back to callers - only links to targets, so you can't draw a
complete map starting from any page - The WWW is not like the brain at
all (but it could be). Starting from any neuron in the brain you could trace
out a complete map of it!
There is no problem in giving names to each page/webneuron (see page
names on the Website). These do not have to be unique. Each page has a
filename/URL to locate it. There is a virtually endless address space of
unique filenames/URL's. This is one reason I like using web pages - no
16/32 bit address space limits. We don't have to change anything when
computers go to 64 bits.
On my local system I just use numbers for filenames e.g. 123.htm. I don't
think filenames/URL's are important as a location method. What is
important is:-
1) The map of the system
2) The name of each webneuron/page (non unique)
3) Directories
If you want to FIRE a webneuron then you could first manually find it
using the above options. Then a permanent link can be established. You
could also find a webneuron automatically under program control perhaps
using the directories or even by automatic forward/backward tracing using
the map. There are many possibilities.
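Because every link is two-way, forward/backward tracing is just a walk over the link map; a minimal sketch (the pages and links here are a toy example):

```python
from collections import deque

# Each page lists both its targets and its callers, so the whole map
# can be reached starting from ANY page -- unlike the WWW, where the
# back-links are missing.
links = {
    'p1': ['p2'],
    'p2': ['p1', 'p3'],
    'p3': ['p2', 'p4'],
    'p4': ['p3'],
}

def trace(start):
    """Breadth-first walk of the link map from any starting page."""
    seen, queue = {start}, deque([start])
    while queue:
        page = queue.popleft()
        for nxt in links[page]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

assert trace('p4') == {'p1', 'p2', 'p3', 'p4'}
```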
Regarding directories: One feature I like is full text indexing, where every
word on a Website is indexed. This has drastic implications for
webneurons if I stick to my principles of using them for everything. We
need one index webneuron for every different word in the Website. Each
index webneuron must have links to every page where its word appears.
Thus it is essential to have automatic page/link creation as new words are
created/added. I could go completely ridiculous here and suggest that long
text is broken up into small units each of which is enclosed in a
webneuron. On the Website I allowed complete postings (often many
pages) to be added to the end of the "webneuron". This is already giving an
indexing problem, which would be alleviated if the text was broken into
smaller pieces each in their own "webneuron" with linking capability at the
top of the page.
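The index-webneuron idea, sketched with a dict standing in for the family of index webneurons (the page names and text are made up):

```python
import re
from collections import defaultdict

# One index entry per distinct word, each holding links to every page
# where its word appears.  A new word automatically gets a new entry,
# mirroring automatic page/link creation.
pages = {
    '585.htm': 'webneurons fire other webneurons',
    '652.htm': 'people who replied about webneurons',
}

index = defaultdict(set)          # word -> set of page names
for name, text in pages.items():
    for word in re.findall(r'\w+', text.lower()):
        index[word].add(name)     # entry created on first sight of word

assert index['webneurons'] == {'585.htm', '652.htm'}
assert index['people'] == {'652.htm'}
```

Breaking long text into small pieces, each in its own page, keeps every set of links short and the index cheap to update.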
By the way I would call the Website pages "webneurons" but without
firing ability etc.
We can also have directories of various subclasses e.g people, graphic
files, Emails, News, weblet libraries, etc. I have done a small one for all
the people who replied to the original webneuron posting. It's at:-
http://brain.eu.org/652.htm
Directory webneurons can be created automatically once we have a weblet
interpreter and editor.
>> It may be advantageous to have an EVAL type statement that would
>> allow us to create a line of HTML using strings and embed it into or
>> on the end of the page.
>
>I hope you don't mean a function you can pass system calls to (like the
>perl eval function). Then we've got a potential security hole the size
>of the Titanic.
Could you tell me more about this please? What causes the security
problem?