From: John Middlemas
Subject: RE: Web neurons
To: Fil Mackay <fil@mpce.mq.edu.au>
Cc: www-html@w3.org, hyper-theory@math.byu.edu

>What you're proposing is not HTML. HTML is about delivering pages of content for presentation.

Whatever HTML is about can be changed.

>Why don't you make something new.. a document format which is appropriate to your use.

I may have to, but I must try HTML first to try and keep things integrated. The document format I am looking at is a multi-column table page, with information about the source, destination, creator and home pages, plus details of INPUTS. Timing information and persistent variables could also be recorded in the table. All this can be expressed quite well using an HTML table (there's a rough sketch further down).

>You've ditched HTML anyway, because nobody is going to be able to render your HTML-variant anyway!

As an example, let's say you wanted to type loads of items into an (extended) HTML form and produce a fancy display HTML page with all the items neatly arranged. You would have another extension to HTML that read the form inputs into variables. These could then be FIRED to a mini-web of neuron pages that processed and arranged things into the final display page. All this would be in the background and invisible until the final page was passed back to the browser for rendering in the normal way. I can't see your point. I suppose you could probably do the same with Java, but with the Web neuron method you would be programming the system using the Web neuron table pages mentioned above, hopefully viewed through your browser. You could hop from table to table editing, checking, tracing and debugging. Programming could become like surfing, rather than a torture of cryptic keywords, punctuation, and nest within unreadable nest. You would be able to follow exactly what was going on. To enable this for local use might only require a special plug-in for Netscape, or some extra features in Navigator Gold. A global multi-server system would require more thought.

>>>Sounds like structured programming in HTML? Why not use Java..?
>>I don't believe in structured programming.
>Sounds like a contradiction in terms :-)
>>All that is happening is that inputs are being passed from one place to another; there is no compulsory return as in standard languages. In fact it is more like a "GOTO" than anything structured. The structure is in the Web itself.
>Sort of like a petri-net?

It's sort of like a neural network. If that is structured then so are Web neurons. What's a petri-net?

>>I don't know much about Java but I don't believe it operates within a Web page, which is the important point (I haven't got Windows 95 or NT).
>Uhh.. Java operates within web-pages (applets) and as stand-alone applications (in their own window). What environment do you run on?

Poor old Windows 3.1. I have a dread of upgrading; everything always blew up before. Can you view Java as lots of small Web page tables and trace the logic as I mentioned above? Or is it gobbledegook as usual?

>>>Don't tell me you actually LIKE HTML? :)
>>Hmmm.. I don't hate it like I do C and BASIC. It's simple to produce a quick page or two. I wouldn't call it a language because it doesn't process - yet.
>All a language is, is a set of symbols which are animated by some other process (e.g. Assembler is animated by a CPU). HTML (or C or BASIC) don't actually process.. they afford the ability, but of differing types.

What I mean is that HTML does not possess logic flow control statements, e.g. CALL, GOSUB, IF ELSE etc.
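To pin down the table idea, here is a rough sketch of what one of these Web neuron table pages might look like in ordinary HTML. Nothing here is real or agreed syntax - the field names and URLs are all made up purely for illustration:

<HTML>
<HEAD><TITLE>Neuron: arrange-items</TITLE></HEAD>
<BODY>
<!-- One Web neuron as a plain HTML table. All names and URLs invented. -->
<TABLE BORDER=1>
<TR><TH>Field</TH><TH>Value</TH></TR>
<TR><TD>Creator page</TD><TD>http://somewhere.org/john/home.htm</TD></TR>
<TR><TD>Home page</TD><TD>http://somewhere.org/neurons/index.htm</TD></TR>
<TR><TD>Source (link in)</TD><TD>http://somewhere.org/neurons/read-form.htm</TD></TR>
<TR><TD>Destination (link out)</TD><TD>http://somewhere.org/neurons/display.htm</TD></TR>
<TR><TD>INPUTS</TD><TD>item_list, caller_ip (persistent variables)</TD></TR>
<TR><TD>Timing</TD><TD>last fired: 21:14:03</TD></TR>
</TABLE>
</BODY>
</HTML>

A browser today would render this as an ordinary page, which is the backwards compatibility I'm after; only a FIRE-aware server or plug-in would treat it as a neuron.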
>>>The scenario you are painting IMHO would be more appropriate to distributed Java, where the tools are already there (or will be), and are easy to use
>>Why complicate by adding another language when all that is needed is some modification to HTML. Integration is better than fragmentation.
>Sometimes it's better to wipe the slate clean and use something better.

Will Java do away with HTML then? How will pages be rendered? I don't believe Java has any neural basis. I would love to start from scratch, but is there time?

>With this ongoing browser/OS debate, I am convinced that "application" and "operating system" are completely relative. Everything below you (no matter who 'you' are) is the OS, and everything above you is an application.

I agree. What is below can be built from the bottom up on neural principles. What is above leads to you and me, the observers, and we are built on neural principles too. So it is all the same stuff - we've all just been misled by Microsoft etc. So let's cut out all the multi-levels below (Windows XX.X etc.) and concentrate on basic Web principles. It is not completely absurd that an OS could be built on HTML with suitable extensions, but I tend to agree that the slate should be wiped. Having wiped it, what are we left with? We know from the Web that we need hyper-links and a near-infinite address space. But where do we put the logic? I did suggest ..... Maybe I'm bitter, but I spent years trawling this language after that - each one taking ages to learn - each one doing just about the same thing - each one blocking you one way or another - hundreds of hours wasted due to untraceability.

>I think it's quite natural you think it is an OS, but that doesn't mean that it really is. (or more correctly.. other people will agree).

If what you said above is true - that it's all relative - then the brain is an OS, and if we copy it on a computer we should have an OS.

>>I know it's a bit radical but the system can also be used on your local machine without referring to external URL's. Aren't they planning to dramatically increase the number of possible URL's soon?
>No, we already have an infinite number of URL's. All a URL is, is a string of characters.. relatively infinite.
>>Something like there will be 10 million URL's available per sq metre?
>I think you're referring to IP addresses, when TCP/IP gets upgraded.

Yes, I am completely wrong on this (and a bit stupid). I did mean IP addresses; it is URL's that actually locate the page.

>I think the resource issues are more relevant to the server. At the moment, every web page is stored as a file. Each file takes up a minimum of 16k of space, no matter how little it is.
>Sounds like you are planning on a LOT of pages :)

I don't follow this. I checked my .htm file sizes and some are as low as a few hundred bytes?

>>>Seriously, web 'pages' were designed to be just that - pages of text/content. Linear and all that stuff.
>>Surely Web pages are non-linear because they have hyper-links.
>A 'page' is linear. If the web content wasn't linear, then we'd have a network of symbols delivered to our browser with no 'start' and no 'finish'. Instead we have chosen to use an SGML-derivative which describes the content from start to end. Our whole basis for this WWW thing has been linear.
>Just because we have 'warp holes' placed within the content (anchors) which take us to goodness-knows-where (links), does not make the medium non-linear (IMHO anyway).
>My opinion is that a non-linear medium has a lot more than the ability to move around linear spaces. I think the actual _content_ should be represented in a non-linear form.

Interesting. The brain is non-linear, and you can model the brain on the Web (with suitable HTML extensions), therefore the Web is non-linear, or potentially so (IMHO). A series of pages hyper-linked in a chain can model a straight line, with 1 link in and 1 link out of each page. With 2 links in and 2 links out an area can be modeled, and with 3 in, 3 out, a volume. From then on, just by adding more links, you can model any n-dimensional space. In addition, by linking the ends of, say, the 1-D straight-line chain you have modeled a curved space. I'm not an expert, but this looks like non-linear behaviour to me, and that's without any HTML extensions.
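To make the closed chain concrete, here are three perfectly ordinary pages, each with one link in and one link out, ends joined to close the loop (file names invented, but this is plain HTML that works today):

<!-- a.htm -->
<A HREF="b.htm">next</A>

<!-- b.htm -->
<A HREF="c.htm">next</A>

<!-- c.htm - linking back to a.htm closes the loop, the "curved space" -->
<A HREF="a.htm">next</A>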
>I see using HTML to represent content in the manner you are describing, as hacking Newtonian physics to describe relativity. What you're better off doing is throwing it out, and going with E=MC^2.
>My point is, don't use HTML for the hell of it - only where appropriate.

>>The FIRE command and inputs can be appended to a standard URL request. If there is no FIRE command then the target page is returned by the server as normal, therefore it should be backwards compatible. If there is a FIRE command then the server behaves very differently.
>How is the document 'returned'? What is the difference between a FIRE and non-FIRE request? (in terms of what actually gets returned)

When you click on a link and the page you requested contains no FIRE commands, or any other extension commands that might inhibit normal page return, then the page appears on your screen in the normal way. This would be the non-FIRE case. It might be wise to include a:-

<CODE>

HTML extension at, say, the top of the page, so that the target server could easily see whether there were any FIRE or other logic flow commands in the page. If the page you requested contained a FIRE command (or other logic flow commands), there are at least two possibilities. Firstly, the page could be returned to the caller as usual, and the calling browser/server could handle the FIRE command etc. Secondly, the target server could handle the FIRE command on the spot. Let's assume the latter, since it's more distributed. Also there could be a:-

<NO_RETURN>

HTML extension which told the target server to return nothing to the caller. If the page consisted, say, of just two FIRE commands with a NO_RETURN, then all that happens is that the target server requests both pages pointed to by the FIRE commands and passes on any INPUTS (see original post). If those two pages also contain two FIRE commands and a NO_RETURN each, then I am sure you can see that a chain reaction is quickly generated, with many servers involved and the original caller getting nothing back. However, somewhere along the line a page could return something to the original caller, because the INPUTS could keep transmitting the caller's IP address downstream. The complexities, rewards, and possibilities are endless. This is why neural networks are successful, although this idea is not really one. There are many other combinations etc. that need looking into, but I think the basic principle is sound. These few simple extensions (or similar) are all that is needed to light up the Web. Don't you want that?
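Putting those pieces together, a minimal neuron page that only propagates might look something like this. FIRE, NO_RETURN and INPUTS are of course only my proposed extensions - no server or browser understands them today - and the URLs are invented:

<HTML>
<NO_RETURN> <!-- proposed: target server sends nothing back to the caller -->
<FIRE HREF="http://server-a.org/neurons/n1.htm" INPUTS="caller_ip, item_list">
<FIRE HREF="http://server-b.org/neurons/n2.htm" INPUTS="caller_ip, item_list">
</HTML>

Each target server that receives one of these requests fetches its own copy of the page, finds two more FIREs, and the reaction spreads; because caller_ip is passed along as an INPUT, any page downstream could still return a final result to the original caller.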
>>>>Hypertext works because it mimics the hyper-dimensional linking in the brain. Its success is not to do with "text". After all, you can link from jpg's too. But the brain also "processes" inside the neurons, and this is not mimicked on the Web. If we think of a neuron as a Web page then processing needs adding within the page. A rough first shot at this has been taken above.
>>>Why not use a neural net model to represent content/the web? Then you could build all these structures?
>>I don't know how to.
>Why not use a simple table:
>
>ID, Description, FromID, ToID
>
>This table is able to model a neural network. Each entry is a node in the network, and can potentially be a link (using FromID, ToID).

Now we're getting somewhere. This is the sort of thing I am trying at the assembler level, although I didn't know it was a neural-net model. But couldn't we also use HTML tables - perhaps one per page? How would you implement your idea to produce, say, the sort of distributed chain reaction I mentioned above, and also keep normal surfing ability, which people won't want to lose unless there's a replacement? Should there be some input and output levels as well:

ID, Description, (FromID, input_level), (ToID, output_level)

After all, each neuron in the brain has inputs and outputs that pulse at variable frequencies. What about the logic that relates inputs and neuron memory to outputs?

>>>Hope I didn't offend too much. I only meant to offend a little. :)
>>Only the bit about the Web crowd, but why do you like to offend at all?
>Because offending (a little) makes people stand up and say what they really think. It brings out the real issues.. sorry if others don't agree.

Everyone has their own philosophy. Thanks for your interest - keep 'em coming. I'm off for a cup of tea.
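P.S. Just so we're looking at the same thing - your table with the levels added, done as an HTML table with one row per node (every ID and figure below is invented purely for illustration):

<!-- Hypothetical wiring table; IDs and levels are made-up examples. -->
<TABLE BORDER=1>
<TR><TH>ID</TH><TH>Description</TH><TH>FromID</TH><TH>input_level</TH><TH>ToID</TH><TH>output_level</TH></TR>
<TR><TD>n1</TD><TD>read form items</TD><TD>n0</TD><TD>0.7</TD><TD>n2</TD><TD>0.4</TD></TR>
<TR><TD>n2</TD><TD>arrange items</TD><TD>n1</TD><TD>0.4</TD><TD>n3</TD><TD>0.9</TD></TR>
</TABLE>

One such table per page would keep things surfable - the browser just shows the table - while a FIRE-aware server could read the same rows as wiring.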