IMO, if we’d gotten Xanadu instead of the web, we’d still be stuck looking at the equivalent of 1997’s user interfaces. And thanks to TN’s obsession with copyright enforcement, by now (20 years later) the metadata would outweigh the data a hundred to one. It would be a nightmare.

Comparisons with the Web are limited because the Web has been stretched so far beyond its roots as a distributed hypertext system. I find Xanadu a lot more compelling when I consider very large collections of documents, e.g. an encyclopedia, dictionary, or textbook archive. Indeed, Nelson’s own examples tend to be non-fiction essays and criticism. What would a deeply annotated version of The Art of Computer Programming look like in a hypertext system with two-way links?

Rather than being an expansion of normal copyright enforcement, it’s just as much of a subversion as the GPL. Because every clip, no matter how small, is verified, and all assembly occurs on the target machine, derivative works literally cannot be shut down on copyright grounds & abuse of DRM isn’t possible.

Today, most data on the web consists of partial, degraded duplicates with no notation of common origin. These duplicates cannot be combined or traced, because nothing points back to a canonical form. Replacing the duplicates with metadata that references a single canonical copy makes the total amount that needs to be stored much smaller.
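The “metadata instead of copies” idea can be sketched as an edit-decision list over a content-addressed store. This is my own minimal illustration (the names `put`, `assemble`, and the span format are invented here, not Xanadu’s actual formats): a derived document is nothing but span references into canonical sources, and assembly happens on the reader’s machine.

```python
import hashlib

# A hypothetical canonical store: content hash -> full source text.
store = {}

def put(text: str) -> str:
    """Store a canonical document once; its address is its hash."""
    addr = hashlib.sha256(text.encode()).hexdigest()
    store[addr] = text
    return addr

def assemble(spans):
    """A derived document is just a list of (address, start, end)
    references; assembly happens on the reader's machine."""
    return "".join(store[addr][start:end] for addr, start, end in spans)

src = put("As We May Think, by Vannevar Bush.")
quote = assemble([(src, 0, 15)])  # a transclusion, not a copy
```

Since every quote carries the address of its source, common origin is never lost, and the source text is stored exactly once no matter how many documents transclude it.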

Re: user interfaces – there has been no meaningful progress in mainstream UIs since 1977. There have been interesting fringe experiments, including Squeak’s Morphic, Plan9’s Rio, Xanadu’s own ZigZag, and Jef Raskin’s Swyft and ZUI work, none of which have had any impact on things non-Lobsters-users have heard of. UI design, as far as ‘regular users’ know, begins and ends with things Alan Kay worked on in the 70s.

You can’t take the Xanadu web pages (which were mostly written in the 90s) & Xanadu open source releases (which were largely written in the 80s) as representative of Xanadu UI designs.

When I worked on the replacement for XanaduSpace (which still looks ostentatiously flashy today, despite being written in 2006), I was pushed hard on visual polish – which, ultimately, I couldn’t deliver: OpenGL simply doesn’t have the facilities to make it easy to render an entire Bible in real 3d while keeping editing as fast as on a 2d surface.

I personally disagree with some Xanadu UI design decisions, but primarily because they are too visually radical in ways that don’t make sense to me (like text editing in 3d) rather than because they resemble mid-90s web design (which I prefer – at least mid-90s web design worked, when it came to displaying minimally-formatted rich text on slow machines).

I could be misunderstanding something, but wouldn’t any implementation of a two-way link system (where each end of a link must be aware of the other) require either (A) a centralized authority or (B) majority decentralized consensus, necessarily opening the door to censorship?

Could Xanadu not be accomplished with some kind of file spec/protocol and rich transclusion rules (i.e. PDF but with a standard way to “refer” to other entities by hash)? Why are two-way links even necessary for the functioning of Xanadu as described?

Why are two-way links even necessary for the functioning of Xanadu as described?

Because Nelson thinks that links should never get invalidated by a single endpoint, and because he thinks that nothing in the system should ever be copied, but always linked to or transcluded.

It’s an interesting thought. Think what the digital world would be like if you could always find the original source of everything. What if the original creator were always rewarded (even in a very small way) for being transcluded? It’s not necessarily a world I’d think is ideal, but interesting nevertheless.

Content-addressed networks like IPFS make this straightforward without centralized coordination. We don’t care about linking to the “original source” in the sense of “original host” – just that we have truly-immutable associations between address and content.
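The immutability property being claimed can be sketched in a few lines (SHA-256 here stands in for IPFS’s actual multihash-based CIDs; the function names are mine): because the address is derived from the bytes themselves, any peer handing us content for an address can be checked locally, with no trusted host involved.

```python
import hashlib

def address_of(content: bytes) -> str:
    """In a content-addressed network the address is derived from
    the bytes themselves, so it cannot point at anything else."""
    return hashlib.sha256(content).hexdigest()

def verify(address: str, content: bytes) -> bool:
    """Bytes received from any untrusted peer can be checked
    against the address locally -- no 'original host' needed."""
    return address_of(content) == address

doc = b"Literary machines"
addr = address_of(doc)
# The association is immutable: the same address can never
# verify against different content.
```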

(I should note that, last I heard, IPFS wasn’t being used internally in Xanadu projects except experimentally. I pushed for it hard when I was involved, and so did Brewster Kahle. Mostly, we relied on conventional HTTP caches of documents originally fetched by HTTP, either storing them ourselves or relying on the Internet Archive to ensure they didn’t change under us. But, everybody involved is aware of IPFS so I expect IPFS support to eventually appear – probably once we can figure out how to coordinate a guaranteed minimum number of pinned copies of any given document.)

Two-way link systems don’t require either centralized authority or majority decentralized consensus. The endpoints don’t need to be “aware of each other” because they live together in the link itself – the documents involved don’t need to know about the links that associate them. (Consider something like RapGenius. The documents being annotated don’t know about the annotations. Nobody would see any annotations unless they checked for them. The same is true with hypertext.)
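A toy link store makes this concrete (the data layout is my own sketch, not Xanadu’s): a link names both endpoints, lives entirely outside the documents it connects, and supports lookup from either side without any cooperation from the documents.

```python
from collections import defaultdict

# Links live apart from the documents they connect. A link endpoint
# is (document id, start, end); neither document stores anything.
links = []
by_doc = defaultdict(list)

def add_link(end_a, end_b):
    """Create a two-way link: one record, indexed under both sides."""
    link = (end_a, end_b)
    links.append(link)
    by_doc[end_a[0]].append(link)
    by_doc[end_b[0]].append(link)

def links_touching(doc_id):
    """Readers who consult the link store see both directions;
    readers who don't see only the plain document."""
    return by_doc[doc_id]

add_link(("proust-vol1", 100, 140), ("plan9-docs", 7, 30))
```

Note that neither side of the link needed permission, a central authority, or consensus – the link is just a record held by whoever cares about it.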

Two-way links are literally easier than one-way links. After all, you can create them without permission from any of the parties involved in the original documents, keep them secret and distribute them to only your friends, distribute “OJAS’s 100 BEST LINKS OF 2018” packs to show off how clever you are at connecting Proust to the Plan9 documentation, etc.

Meanwhile, since links usually refer to underlying source text in a transclusion (rather than the context in which the link was created), even the tail ends of links you yourself created can be meaningfully surprising & interesting when you read transcluded content in a different context.