The User Liberation Front

From the beginning this blog had the subtitle “Dispatches from the Programmer Liberation Front”. I have changed it to the User Liberation Front. Really this change has been building for years now. I started out wanting to fix programming, to help realize its full potential, and to uplift our tribe of nerdy misfits. I slowly realized that the heart of the problem is not our technology but our culture itself. Programming sucks because we like it that way. It entertains us with puzzles; it affirms our differences from the outgroup; it rewards us with power and wealth. Enough. I am now an anti-programmer.

The User Liberation Front works to put the power of computers into the hands of users, freeing them from the domination of technologists and corporations. Spreadsheets do this today. We will spread the freedom of spreadsheets to other domains of software, so that, users no more, all can create the software they need.

I am not alone in this. Are you with us?

11 Replies to “The User Liberation Front”

Yes! And I’ve been with y’all since 1981, when I was hired by AT&T and Knight Ridder News to design the “authoring system” for the nationwide rollout of what promised to be the first mass-market network: VIEWTRON. The long history of only partially successful attempts at Turing-complete, user-malleable interfaces includes not only spreadsheets (never quite Turing-complete unless you include scripting backends) but also famous candidates such as Smalltalk and, less famously, relational database interfaces.

My experience with PLATO’s “authoring system” — intended to make it so that public school teachers could write computer-based educational “courseware” — led me, at VIEWTRON, to take a hard look at Smalltalk, spreadsheets, logic programming, functional programming, distributed programming, what is now known as reactive programming, and relational DB interface research.

There is an underlying relational-reactive structure to all of this that must be respected in order to construct user interfaces reaching all the way down into the guts of Turing completeness without violating the user’s perspective.

Everything starts with user-specified cases (in spreadsheet terms, rows in a table) and, as you have started to penetrate with your experiments, the interface facilitates discovery of rules that make sense to the user:

Program Induction

The ultimate extension of this process gets us straight to the heart of Artificial General Intelligence theory:

Solomonoff Induction

That is to say, the user interface must start from the user’s experiences and facilitate the application of Ockham’s Razor to produce their most parsimonious comprehension, approximating discovery of their (uncomputable) Kolmogorov Complexity.

The discovery process starts purely relational (a non-deterministic table) and incrementally approximates the functional/deterministic rules required by the underlying Turing-complete VM. Among experimental implementations of this kind of spreadsheet metaphor, the best I’ve seen is Kayia, and it fell _far_ short of the mark. Even so, I put some money into it.
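The case-to-rule discovery described above can be caricatured in a few lines of Python. This is a hypothetical sketch, not any real system: the cases, the tiny hypothesis space, and the simplicity ordering are all invented for illustration.

```python
# User-specified cases: rows of (x, y) observations, as in spreadsheet rows.
cases = [(1, 3), (2, 5), (3, 7), (4, 9)]

# A toy hypothesis space of candidate rules, ordered by description length
# (simpler rules first), so the first match is the most parsimonious --
# a crude stand-in for Ockham's Razor.
candidates = [
    ("y = x",       lambda x: x),
    ("y = 2*x",     lambda x: 2 * x),
    ("y = x + 1",   lambda x: x + 1),
    ("y = 2*x + 1", lambda x: 2 * x + 1),
    ("y = x*x",     lambda x: x * x),
]

def induce(cases):
    """Return the first (simplest) rule consistent with every case."""
    for name, f in candidates:
        if all(f(x) == y for x, y in cases):
            return name
    return None

print(induce(cases))  # "y = 2*x + 1"
```

Real program induction searches a vastly larger (and Turing-complete) hypothesis space, which is exactly where Solomonoff induction and its uncomputability enter the picture.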

Here’s a test of whether you’ve got the right user interface:

Does units conversion (and, where applicable, dimensional analysis) fall out of the relational structure of the spreadsheet (tabular list of cases) rather than being “tacked on” as a semantic afterthought?

Arithmetic should be relational, and structures such as units/dimensions should be induced from the columns required by the user’s perspective on his cases. Numbers (quantities) should emerge from case counts, more closely related to what Bertrand Russell called “relation numbers” than to dimensionless/pure numbers or set-theoretic “types”. Their arithmetic should be more closely related to what he called “relation arithmetic”.
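As a concrete (and entirely hypothetical) sketch of units falling out of relational structure rather than being tacked on: conversions can live as plain rows in a relation, and converting a quantity is then just a lookup over that relation. The units and factors below are invented for illustration.

```python
# Conversion relation: rows of (from_unit, to_unit) -> factor.
conversions = {
    ("m", "cm"): 100.0,
    ("kg", "g"): 1000.0,
    ("h", "min"): 60.0,
}
# Close the relation under inversion so lookups work in both directions.
conversions.update({(b, a): 1.0 / f for (a, b), f in list(conversions.items())})

def convert(value, from_unit, to_unit):
    """Convert by looking up the (from, to) row -- no special-case logic."""
    if from_unit == to_unit:
        return value
    return value * conversions[(from_unit, to_unit)]

print(convert(2.5, "m", "cm"))  # 250.0
print(convert(90, "min", "h"))
```

Dimensional analysis would extend the same idea: the unit column participates in joins just like any other column, so incompatible units simply fail to match any row.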

No, I do not believe in this movement. While I suspect that the current methods of inputting specific instructions into a computer are woefully inadequate and in need of a paradigm shift, the role of the “programmer” has never been about inputting instructions into a computer. It has been the process of translating end-user desire into the specific requirements of “what is really meant”, ad absurdum, until those requirements are specific enough that they are already very close to what might be found in high-level programming languages with frameworks and libraries available.

You often give the example of spreadsheets, as if these are magical and liberating things which allow end-users to work without programmers, but in my experience those spreadsheets don’t get very far before they, too, require expert knowledge in order to ensure they are working correctly. I’ve worked in roles where the majority of the job was debugging / refactoring spreadsheets, as a programmer, because spreadsheets (like any method of programming) hit a point where programmers, i.e. people who think about the requirements and translate them into specific instructions, are required.

It just so happens that in most cases, when the point is reached where a programmer is required to help with a spreadsheet, that programmer will declare that the formulas would be easier to work with if they were stored in some other form, rather than having their definition tied to the interface which displays the results (i.e., if nothing else, defining named functions rather than re-typing the same formula repeatedly).
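In conventional code the difference is easy to see. A hypothetical Python sketch (the prices, tax rate, and function name are invented for illustration):

```python
# The spreadsheet anti-pattern: the same formula re-typed per "cell".
total_a = 100 * (1 + 0.08) + 5.00
total_b = 250 * (1 + 0.08) + 5.00

# The named alternative: define the formula once, reference it everywhere.
def total(price, tax_rate=0.08, shipping=5.00):
    """Price plus tax and a flat shipping charge."""
    return price * (1 + tax_rate) + shipping

# Both spellings compute the same values, but only one has a single
# definition to read, test, and change.
assert total(100) == total_a
assert total(250) == total_b
```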

Rather than trying to make “programming” a thing that anyone can do, perhaps just focus on moving the point at which a programmer is required? I.e. make a better spreadsheet program. People like spreadsheets. Spreadsheets are nice. It would be nicer if the gap between “I need a programmer to help with this” and “I have outgrown the concept of a spreadsheet” were wider, or existed at all.

However, philosophically speaking, the fundamental problem is formalizing experience as knowledge, a.k.a. turning data into information. This is what nervous systems do. Devising computer programs that assist in this process is the key problem. A proper definition of “knowledge” is key, and I haven’t seen a better definition than the one offered by Algorithmic Information Theory: the operational embodiment of data as the smallest program that outputs that data. Yes, this is a very hard problem that, in a nontrivial sense, is the essence of science. Indeed, it is provably noncomputable. We have vast numbers of experts engaged in science for this precise reason (even though many are called “engineers” or even “programmers”). Nevertheless it _is_ reducible to lossless compression, and since people (and animals) do this as part of their cognition, it is entirely reasonable to provide tools that help them organize their raw experiences in such a way as to bring prior expert knowledge to bear on the lossless compression of those experiences.
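Since Kolmogorov complexity is uncomputable, any tool along these lines would have to settle for a computable upper bound. A minimal Python sketch using an off-the-shelf lossless compressor as that stand-in (the data and the contrast are invented for illustration):

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Length of a losslessly compressed encoding: a computable upper
    bound on the (uncomputable) Kolmogorov complexity of the data."""
    return len(zlib.compress(data, 9))

structured = b"0123456789" * 100  # 1000 bytes of pure regularity
random_ish = os.urandom(1000)     # 1000 bytes with no structure to find

# The regular data compresses to a small fraction of its raw size; the
# random bytes barely compress at all. That gap is the "knowledge" (the
# model of the data) the compressor managed to extract.
print(compressed_size(structured), compressed_size(random_ish))
```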

The reason why things are as complicated as they are is a result of the circle of improvement: things start simple but not powerful enough. Then they are improved incrementally, making them more and more complex. So in the end we have powerful but very hard-to-use stuff. Not because of some “tribe of nerdy misfits”, but because it’s very difficult to create things which are powerful and simple at the same time.

For example HTML: in the beginning it was so easy to use that nearly everybody was able to write a simple web page. But we wanted colors, fonts, tables, images, etc. So lots of styles and attributes were invented, and the once-easy HTML became more and more cluttered. Until someone invented CSS to make things easy again – which kind of worked for a short time, but we needed better layouts, better control, etc., and today we have CSS3, which is quite complex again.

The same is true for spreadsheets. Yes, the basic concepts are easy to grasp and enable non-programmers to build quite complex computations. But at some point this too becomes quite inaccessible, because of the very complex expression language modern spreadsheets have, plus an additional “real” programming language like VB to solve additional problems.

Now it’s easy to prove me wrong: “Just” invent something which shows how to solve the problem of programming in a non-nerdy and tribal way. I’m aware of your prototypes and I like some ideas a lot, but they are still far from complete solutions and I suppose that by making them “complete” they would suffer the same problems as I described above.

And it’s no proof of some kind of “conspiracy of the nerds” that nobody has invented a solution yet. It’s like people who see the absence of a cure for cancer as proof of conspiracy. But maybe cancer is just a very hard problem, and that’s the only reason there is still no cure. And some problems may even be unsolvable.

So I think you’re barking up the wrong tree here. Better try to be positive and work on a real solution instead of blaming other people.

— “The reason why things are as complicated as they are is a result of the circle of improvement: Things start simple but not powerful enough. Then they are improved incrementally, making them more and more complex. So in the end we have powerful but very hard to use stuff.” —

That is utter, [profanity moderated].

The problem comes in five parts:

1. Pressure from stakeholder goons, particularly those holding MBAs and/or CPAs
2. The inherent desire of programmers to make their own jobs easier and/or more secure
3. The nerd tendency to be completely beguiled by new and shiny toys
4. Stout refusal of those brought up on algos to treat declarative code like an Actual Thing deserving of study and respect
5. The practice of blithely dumping as many cycles onto the end user’s hardware as can be managed, usually with little regard for what that hardware might actually be able to handle

The depredations of Google and the SM platforms are also a huge part of the problem, but unfortunately they lie outside the scope of the user experience over which any individual programmer not working for Google or an SM platform can actually exercise control.

I support your goals, and agree that “programming sucks”, but I think if a better tool came along, programmers would jump on it with glee. Most programmers I know are not power-hoarders and have a vague intuition of loathing about the state of the art. But they work on what their boss asks them to.

Maybe you could blame the profit-driven evolution of our industry to some extent, since there’s very little financial support to set out on a multi-year project to Change Everything. I took the self-funding route until my company went broke, and that has not been fun. But what we discovered along the way also kind of ruined me as a programmer in the “modern” paradigm. Once you uproot the entrenched assumptions about How Things Work and figure out which ones are good and which ones are toxic poison, going back to working with them for extended periods of time literally makes you physically ill.

Anyway. The whole stack needs to move into the database. The stack is packed chock-full of structures with data in them. Databases are good at structures. The file system is not. The command line is not. The layers of, say, the web stack don’t share a common information model, which is just jaw-droppingly archaic; and yet the language of data, proven ubiquitous and general-purpose, just sits there at our fingertips, unused except for “application data”, whatever that is.

Instead the “state of the art” is working with latently-structured files in a hierarchical directory tree and accessing them with one-off command-line programs. It’s just so awful that it makes me cry for the poor bastards that have to spend their lives toiling in this swill, and the users that we are holding hostage. My sincere prayer to the gods of evolution is that I will live to see this awful paradigm’s demise, and that people will stop thinking of databases as a black box to store tuples, and see them as the most powerful tool humanity has ever invented for turning a shit show into a nice coherent organized system.

We have used the database to successfully model and organize every goddamn corner of the planet — except for our own programming stack. Somehow it is special. Somehow it is sacrosanct.

Our stack is not some special form of complexity. It can easily be represented as just more relations. All of it, even EBNF and code. Prove me wrong, name one part of our stack that can’t be trivially relationally modeled. We need to use the tool that has brought coherence to countless other domains across our planet. Put the whole stack in the database.
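To make that claim concrete, here is a hypothetical sketch: the canonical “special” structure, a file hierarchy, reduced to a single self-referencing relation in SQLite, with full paths reconstructed by an ordinary recursive query. The table layout and file names are invented for illustration.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE node (
        id INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES node(id),  -- NULL for the root
        name TEXT NOT NULL
    )
""")
db.executemany("INSERT INTO node VALUES (?, ?, ?)", [
    (1, None, "src"),
    (2, 1,    "main.py"),
    (3, 1,    "lib"),
    (4, 3,    "util.py"),
])

# Reconstruct every full path with a recursive query -- the "file system"
# is now just a view over relations, queryable like any other data.
rows = db.execute("""
    WITH RECURSIVE path(id, p) AS (
        SELECT id, name FROM node WHERE parent_id IS NULL
        UNION ALL
        SELECT n.id, path.p || '/' || n.name
        FROM node n JOIN path ON n.parent_id = path.id
    )
    SELECT p FROM path ORDER BY p
""").fetchall()
print([r[0] for r in rows])
# ['src', 'src/lib', 'src/lib/util.py', 'src/main.py']
```

The same move works for any tree- or graph-shaped layer of the stack; what a directory listing or a parser special-cases, a join or recursive query expresses directly.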

I am still slaving away in isolation on Aquameta, 100% convinced that datafication is the way to Fix It All, but quite weary of the blank stares that I get when I try to talk about a paradigm without files. Brilliant, open-minded developers whom I know to be forward-thinking, and in total agreement that programming is rife with unnecessary complexity, still don’t get excited about the concept or even really see it. I can only guess why. But short of a compelling UI that actually solves real-world problems, I don’t think I’m going to get much conversation or traction or support from the dev community.

But the UI is really just the icing on a many-layered cake, and just one graphical representation of an infinitely flexible underlying model, and as such, largely arbitrary. What’s way more exciting (to me) is that you can build tons of different UIs against the same model, and yet they can coexist and compete for eyeballs. That means evolution without disruption! That’s more powerful than getting the UI right.

Aquameta’s sample UI is getting closer, but since I’m competing with a web stack that has probably billions of hours put into it, no one layer is particularly better than its competition out in production. The sum of the parts sure is, but… the parts are all still pretty rough around the edges. Here’s the latest demo video:

Anyway, files and directories are hopelessly idiotic. Grammar is schema. Code is data. Any series of bytes that aren’t in a database should not be considered “data”, much less “information”, in any modern sense. So the whole stack is just a bunch of duct tape and pocket lint. Have a nice day.

Back in 2015, I took a look at applying AquaMeta to a supply-chain application but it was in a conflict of interest (potential overlapping competitor) with its warehouse management system. We then looked at Kayia (cum Infinity) as an alternate route to prototyping the application and invested a small amount in advancing its development. However, it, too, ended up stalling out for lack of resources. The CEO of the supply-chain app business became disabled and the business is basically in hibernation.

Unearned wealth generated by network effects (positive network externalities) is at the foundation of civilization’s power. Any sustainable civilization will distribute those positive externalities to the people most responsible for defending property rights — generally young men (whom the civilization seeks to pacify as well as to compensate) — and charge the wealthy a use fee for their property rights. However, as civilizations age, the wealthy get into control of these transfers and gradually replace their use fees with taxes on economic activities (income, capital gains, sales, value added, inheritance, etc.). This is private sector rent-seeking. Then an unholy alliance arises with the priesthood/bureaucracy protecting property rights, and rather than the revenue being distributed to the young men defending property rights, it is captured by bureaucrats. This is public sector rent-seeking.

The older the civilization, the more power is centralized in the aforementioned forms of rent-seeking.

The result is stupid capital whether in the public sector or the private sector.

This realization came to me as part of my aforelinked work to privatize launch services with the Launch Services Purchase Act of 1990.

The only reason we might recover a viable space launch service industry is that the DotCon era shook enough capital loose from Wall Street that some of it rained down on some “nerds” who happened to be pioneering a new regime of network effects where they captured the positive externalities: supply-chain/warehouse management in the case of Jeff Bezos and payment processing in the case of Elon Musk.

Add to this the advent of the microcomputer circa 1980, and the explosion in the number of “programmers” — coupled with Bill Gates capturing the network effects of that industry with his OS’s lock-in gatekeeping role between vendors and consumers of hardware and software — and you have a gawdawful mess. The H-1b visa fraud industry took that gawdawful mess to a millennial eschaton.

As an example: since I had actually helped Ray Ozzie with his computer science classes at the University of Illinois, I tried, when Gates turned the reins of Microsoft over to him, to get him to pay attention to applying Ockham’s Razor, in the form of Algorithmic Information Theory, to ruthlessly remove the cruft from Windows and the entire suite of application programs. However, he was swallowed up by Microsoft’s priesthood, which was then, and is now, taken over entirely by a culture far more adept at rent-seeking than anything in Western Civilization. That culture has captured almost all of the positive network externalities of the information industry. Far from using the network-effect wealth to incentivize simplification, it is instead using it to increase the number of “programmers” certified by developing-world paper-mills. These “programmers” and this “capital” have no interest in simplifying anything. Since making things complicated doesn’t require effort — indeed, it is the default — there is no need for a conspiracy of nerds to continue this trend. All that is necessary is maintaining public policies that protect rent-seeking niches arising from the capture of positive network externalities.

Definitely living in Eric Hanson’s world over here. Yes, some programmers take pride in arcana, but on my team it comes from haste, ignorance, and legacy. If only databases were a bit better with their structures and SQL were snapped out of existence.

If someone somewhere were able to create a tool which “solves” programming and makes writing programs much faster and easier, they would profit tremendously just by using this tool to outcompete anybody else and win the markets.

So why hasn’t anybody done this? Why would startups (like Chris Granger’s Eve project) get lots of money to create such tools if nobody wanted them? I simply don’t see any reason beyond the obvious “it’s very hard, many have tried, but none have succeeded (yet)”.

Is it impossible? I don’t think so, but maybe I (and you and many others) are wrong. I still believe that there are ways to improve programming tremendously, but I don’t subscribe to the conspiracy theory that all the nerds in the world conspire to protect their advantage. I think that it’s just hard and that’s why nobody has done it yet.

Now why don’t people put more money into it? First: huge amounts of money have already been burned on such projects. And since most companies need to make money, they need to get stuff done instead of putting money into research projects. Still, lots of research is done, but most of it simply doesn’t succeed, so you never hear about it.