Subject: Re: Web Neurons
From: dtsADAM@mindspring.com (Dale Smith)
Date: 1996/06/03
Message-Id: <4ouuh8$1jco@mule2.mindspring.com>
Sender: dtsADAM@mindspring.com (Dale Smith)
References: <4nraca$c25@power2.powernet.co.uk>
Organization: ADAM Investment Services, Inc
Reply-To: dtsADAM@mindspring.com
Newsgroups: comp.ai
In comp.ai on Tue, 21 May 1996 02:29:14 GMT,
(John Middlemas) wrote:
> This is a proposal to add certain extensions to HTML to allow
> Web pages themselves to become the central units of a
> distributed (and local) programming network language.
> In other words Web pages would become like the subroutines,
> procedures, or functions of other common languages, but with the
> important difference that there would be no compulsory return to the
> source/caller. This is only an outline proposal.
> Suggested extensions to HTML to partly enable this.
> ---------------------------------------------------------------------
> <FIRE href="target page URL" INPUTS="...", "...", "...", ...... >
> <ABORT href="target page URL">
> the target page can be a local page on the calling machine or
> a page on another Web machine.
This one is an interesting idea, but it seems to me it should be
implemented in a plug-in, not as part of HTML. It's just too
specialized for a general-purpose publishing language. (A sketch of
how a plug-in might handle FIRE appears further down.)
> <ARG cat picture> Input 1 is to be called cat picture
> <ARG my message> Input 2 is to be called my message
> etc
> <IF> </IF>
> <ELSE>
> <ELSEIF>
> <FOR>
> <WHILE>
> <NEXT>
Um, HTML is interpreted as the file comes into the client. In other
words, how could the browser jump to a later part of the document when
that part has not been received yet? And let's not change the HTML
standard to require that the whole document be received before any of
it is interpreted. OTOH, put it in a plug-in and see if it works.
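To make that concrete, here's a minimal sketch in Java (the <IF> tag
comes from the proposal; everything else is my own invention) of why
control flow forces full buffering:

import java.io.*;

// Minimal sketch: a control-flow tag can't be evaluated until its
// closing tag has arrived, so the interpreter would have to buffer
// the whole document before executing anything.
public class BufferThenInterpret {
    public static void main(String[] args) throws IOException {
        BufferedReader in =
            new BufferedReader(new InputStreamReader(System.in));
        StringBuilder doc = new StringBuilder();
        for (String line; (line = in.readLine()) != null; ) {
            doc.append(line).append('\n');
        }
        String html = doc.toString();
        int open = html.indexOf("<IF>");
        int close = html.indexOf("</IF>", open);
        if (open >= 0 && close < 0) {
            // Mid-transfer this is the normal case: the branch body
            // is incomplete, so there is nothing to execute yet.
            System.out.println("<IF> seen, but </IF> not received yet");
        }
    }
}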
> <WAIT REL=vble ABS=vble >
> <LOAD href="page URL">
> <SAVE href="page URL">
SAVE is a VERY bad idea. Even Java applets are not allowed to write to
the hard disk!
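For the record, here's roughly what sandboxed code runs into when it
tries (the file name is made up; run as a plain application, with no
SecurityManager installed, the write would go through):

import java.io.*;

// Sketch of what happens when sandboxed code tries to SAVE to disk:
// under the applet SecurityManager the write is rejected before the
// file is touched. In a plain application no manager is installed,
// so this write would actually succeed.
public class SaveAttempt {
    public static void main(String[] args) {
        try {
            FileWriter out = new FileWriter("stolen.html"); // made-up name
            out.write("<HTML>...</HTML>");
            out.close();
        } catch (SecurityException e) {
            System.out.println("sandbox refused the write: " + e);
        } catch (IOException e) {
            System.out.println("I/O error: " + e);
        }
    }
}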
> FIRE passes the target page URL, and inputs, to the target server or
> calling machine/browser if the target page is local. The browser or
> server must now also interpret the target page and may not even pass
> it back to the caller.
> INPUTS could be anything - text, jpgs, other URLs, whole pages etc.
> They could also tell the target how to operate, e.g. run in the
> background on the target machine, or perhaps also FIRE something back
> to the caller or to other pages.
Interesting tag. Put it into a plug-in for Netscape or IE.
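If I were hacking that plug-in together, FIRE could map onto a plain
fire-and-forget HTTP POST: send the INPUTS to the target URL and never
read the reply. A rough sketch (the URL and input names are invented):

import java.io.*;
import java.net.*;

// Sketch of FIRE as fire-and-forget HTTP: POST the INPUTS to the
// target page's URL, then walk away without reading its output.
public class Fire {
    static void fire(String target, String inputs) throws IOException {
        HttpURLConnection conn =
            (HttpURLConnection) new URL(target).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        OutputStream out = conn.getOutputStream();
        out.write(inputs.getBytes());
        out.close();
        // The caller "can forget about it": we only confirm the send
        // went out, we never read the target page back.
        conn.getResponseCode();
        conn.disconnect();
    }

    public static void main(String[] args) throws IOException {
        fire("http://www.example.com/target.html",      // invented URL
             "cat+picture=...&my+message=hello");       // invented inputs
    }
}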
> There is a possibility here for say "WebMail". A user (or program)
> could easily update/add to another Web site's pages automatically via
> the INPUTS. You would see your mail as Web pages on your server site,
> also with pictures, etc possible. The whole Internet system simplifies
> and Email servers are redundant. Security could be built in.
Um, this blasé approach to security is not good. You should be
thinking about building security in from the start. Security is
turning into the central issue of how the Internet as a whole is
developed.
> Having started the target, the caller page can forget about it.
> The target may return itself to the caller as is done at present or it
> may itself FIRE off many other pages (in the background if required)
> so a chain reaction can easily be generated. This is similar to how
> the brain works.
> ABORT deactivates the target, although a page can also self-deactivate
> say after it had finished its task.
> It may often be necessary for a target page to take input from several
> callers before doing a task. So it is essential that variables are
> persistent even when the page goes to hard disc. All variables should
> be local to their page. Perhaps they could be stored on the end of the
> HTML.
> <WAIT REL=t > means that at time t after the page was last fired it
> will activate on its own for whatever purpose. <WAIT ABS=alarm >
> reactivates it at a specific absolute clock time and date. There are
> probably more variations required.
> LOAD is included so that pages on hard disc likely to be fired can be
> loaded into RAM in advance. They are still inactive though.
> Huge programming tasks can be performed involving many computers in
> parallel.
> Conventional programming script such as IF, ELSE, etc is now locked up
> and local to its Web page where it can do less damage. Logic outside
> the Web page is handled by the way Web pages are linked together,
> similar to a neural network.
> Very short and simple use of IF, ELSE etc is recommended. The
> architecture of the Web page links should control the main logic, i.e.
> like a flow chart.
Nope, sorry. If the point is to lock the script up where it can do
less damage, that place is the server, and it should stay there.
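He also wants page variables to persist "even when the page goes to
hard disc", stored on the end of the HTML. That bit, at least, is easy
enough to mock up server-side; the <!--VARS markup below is my own
invention:

import java.io.*;
import java.util.*;

// Sketch of page-local persistent variables kept at the end of the
// HTML file, inside a comment block. A LOAD would scan for the marker
// and parse the same block back out. The "<!--VARS" format is invented.
public class PageVars {
    static void save(File page, Properties vars) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        vars.store(buf, "page-local variables");     // key=value lines
        FileWriter out = new FileWriter(page, true); // append to the page
        out.write("<!--VARS\n" + buf + "-->\n");
        out.close();
    }

    public static void main(String[] args) throws IOException {
        Properties vars = new Properties();
        vars.put("hits", "42");               // made-up variable
        save(new File("neuron.html"), vars);  // made-up page name
    }
}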
> It would be nice to be able to view the system as a map and zoom in
> for editing. So each page should have links to all its callers,
> targets, creator page, index page, and any local home page(s), to
> facilitate tracing and editing. The map can be constructed from this
> information. Alternatively, you could hop from page to page as usual.
We do this now!
> Variables would have to be linked into various other HTML commands to
> enable the script to actually do anything. Many other HTML extensions
> would also be necessary for a complete programming language, e.g.
> setting a text INPUT into the HTML page proper.
I don't see why HTML should be a programming language. And I don't see
how a text input could be put inside a page that has already been
interpreted, short of reloading it; and then you just get the original
page back, unless the text is sent along to the server first. The
current HTML specification (forms) already handles this.
> Why bother with over-complex and fragmented languages like C, Java
> etc when the structure of the Web itself can handle the job
> in a simpler and better way; not to mention the parallel and
> distributed aspects.
Because HTML is SLOW: it's interpreted! The structure of the Web can't
do any of this, unless you are talking about the extremely fluid web
of links between millions of documents. And given how dramatically Web
pages can change, if you don't have control over the design of a page
you depend on, your logic breaks right along with it.
Programming seems to be migrating toward building the parts that need
to be fast as C/C++ modules and using some higher-level language to
glue them together. So I don't favor discarding everything and using
HTML. For evidence of this, look at the popularity of using VB to
access databases on PCs and mainframes.
Remember, the more features an interpreted language has, the slower it
is to interpret. Computers are getting faster, but I don't know of
anyone who has a SPARC in their home (I know lots of people who want
one, though).
> For example in writing a database application each page could be one
> data item of the database, e.g. "Smith". The way the pages were linked
> would determine the structure of the database and access to it. There
> would be a lot of very small pages using the power of the links!
> The above principles can be used to write an operating system so that
> the same simple method is used from bottom to top. This is the idea of
> Plexos (Plexus Operating System) on which I am currently working
> (a plexus is a network of nerves). It is possibly true that these
> principles should also be built into the manufacture of
> microprocessors. If so then it is sad this has not already been done.
> Surely we are not utilising the power already waiting in the Web
> structure itself. Hypertext works because it mimics the
> hyper-dimensional linking in the brain. Its success is not to do with
> "text". After all you can link from jpgs too. But the brain also
> "processes" inside the neurons and this is not mimicked on the Web. If
> we think of a neuron as a Web page then processing needs adding within
> the page. A rough first shot at this has been taken above. The brain
> has been under development for X billion years so shouldn't we have
> tried to copy that ages ago instead of creating BASIC, C, FORTRAN, and
> now JAVA!!!
Some HTML pages involve processing too, but this is rightly considered
a server-side issue, and it's called CGI. Many sites generate custom
pages based on what the server sees from the client app.
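For flavor, here's a toy CGI program (the query field is made up; per
the CGI spec the server sets QUERY_STRING and sends whatever the
program prints back to the client as the page):

// Toy CGI program: the server sets QUERY_STRING and runs this, and
// the output below becomes the custom page the client sees.
public class HelloCgi {
    public static void main(String[] args) {
        String query = System.getenv("QUERY_STRING"); // e.g. "name=Smith"
        System.out.print("Content-type: text/html\r\n\r\n");
        System.out.println("<HTML><BODY>");
        System.out.println("<P>You sent: " +
            (query == null ? "(nothing)" : query) + "</P>");
        System.out.println("</BODY></HTML>");
    }
}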
I think the better idea is to put the burden on the servers. They are
the fast ones, and with IBM migrating their AS/400 and RISC mainframes
to "easy" Internet connections, the whole idea of doing this on the
client side is just not fruitful. Home users have shown some
reluctance to go out and buy the very fastest and latest processors:
they just don't need that much speed. Heck, I don't, and I can't
imagine spending $10,000 or more on a home computer just so my browser
can simulate neural nets or genetic algorithms!
> Sorry for the long post. I couldn't make it perfect but I posted it
> anyway.
> What do you think?
I know there are already plug-ins/Java apps (maybe?) for using TeX
inside of HTML docs, which is a nice way of extending HTML. I also
think that HTML will move closer to TeX in the next few years. After
all, TeX is meant to be a platform-independent publishing tool, and
that sounds a bit like HTML to me!
As far as adding AI-specific things to the HTML standard goes, it's an
extremely bad idea. Most people (99.999%) won't know what to do with
those tags, and it defeats the universality of HTML, as well as its
ease of use. I DO support the idea of plug-ins, mainly because no one
is forced to install them! That puts the focus right back where it is
now: on the end user.