There seems to be some confusion here around the R in FRP. I’ve usually seen it expand to Functional Reactive Programming, a formulation which I believe dates back to work by Conal Elliott and Paul Hudak in 1997, and whose influence I believe is explicitly acknowledged by frameworks like, um, React. The OotTP paper is the first I’ve seen to use the term Functional Relational Programming, and of course they cite Elliott and Hudak.

There are lots of possible reasons why the OotTP work wasn’t especially influential. Acronym shadowing might be a minor one.

The “what does FRP mean” problem is even worse than that: Elliott’s FRP idea was about “behaviours”, which are values/functions parameterised by time. To ‘run’ these programs we sample them at different time parameters, e.g. game(0), game(1/60), game(2/60), game(3/60), etc. for the frames of a 60-frames-per-second game. We can even use different sample rates for different parts: e.g. if a game’s physics engine is expensive, we might wrap it in a behaviour which never samples the actual engine faster than 10 times per second, and just linearly interpolates between those samples at other times.
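For concreteness, here’s a tiny sketch of that sampling idea in Python. It’s my own illustration, not code from Elliott’s papers: a behaviour is just a function from time (seconds) to a value, and the wrapper caps how often the “expensive” behaviour is actually evaluated, interpolating in between.

```python
# A behaviour is a function of time. `expensive_physics` is a made-up
# stand-in for a costly simulation step.
import math
from functools import lru_cache

def expensive_physics(t):
    return math.sin(t)  # pretend this is expensive

def interpolated(behaviour, max_rate=10):
    """Wrap a behaviour so the underlying one is evaluated at no more
    than `max_rate` distinct times per second; in between, linearly
    interpolate between the two surrounding samples."""
    step = 1.0 / max_rate
    cached = lru_cache(maxsize=None)(behaviour)  # memoise real samples
    def sampled(t):
        t0 = math.floor(t / step) * step
        t1 = t0 + step
        frac = (t - t0) / step
        return cached(t0) + (cached(t1) - cached(t0)) * frac
    return sampled

physics = interpolated(expensive_physics, max_rate=10)

# Sample at 60 fps even though the real engine only runs at 10 Hz.
frames = [physics(i / 60) for i in range(60)]
```

Over one simulated second, the real engine here is evaluated at only eleven distinct times (the 10 Hz grid points plus one boundary), however fast the consumer samples.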

The problem is, the continuous/parameterised nature of FRP got mostly ignored, so FRP has instead come to mean discrete-time event handling (e.g. Elm, React, …). Elliott has even distanced himself from the term these days.

PS: One reason continuous-time FRP didn’t originally catch on was that it allows past behaviours to be queried, which prevented implementations from ever garbage-collecting old results, just in case they got queried at some point. Newer approaches, like FRPNow, fix this issue.

The fact that they’re already using continuous operations gives me a project idea for whoever wants to one-up it. They could build it as a dedicated analog computer. The game doesn’t have a lot of functions or much value range either, which should make it easier. It would then run in real time on real values using hardly any circuitry or watts. It could even be turned into a toy appliance if combined with a screen: something to look at, like the old lava lamps.

An analog electronic computer can model small (whole!) numbers of continuous variables and relations between them, but I’ve never seen one that can actually model a continuous space of variable points. For that we’d use real chemistry: the most famous are the B-Z reactions, but the generic term is reaction-diffusion system. Of course, the chemicals are made up of discrete molecules, but their motion is continuous at least down to the Planck length!

This Python thing isn’t properly continuous either, of course: it’s using floating-point numbers for the values and a numpy array for the 2D cell grid. The neighborhoods are circular discs which cover a relatively high number of cells, thereby approximating a continuous space. It’s a little ironic, because Conway explicitly described Life as a discretization of a differential equation system.
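The disc-neighbourhood trick can be sketched in a few lines of numpy/scipy. The sizes and radius below are arbitrary illustrative values of my own, not the linked project’s actual parameters: build a boolean disc mask, normalise it, then take each cell’s neighbourhood mean with a 2D convolution.

```python
import numpy as np
from scipy.signal import convolve2d  # assumes SciPy is available

radius = 5
y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
disc = ((x**2 + y**2) <= radius**2).astype(float)
disc /= disc.sum()  # normalise: the convolution now gives a mean

rng = np.random.default_rng(0)
grid = rng.random((64, 64))  # continuous cell states in [0, 1)

# Wrap-around boundaries, as in most Life-like simulations.
neighbourhood_mean = convolve2d(grid, disc, mode="same", boundary="wrap")
```

An 11×11 disc covers ~80 cells, which is what makes the grid behave “almost continuously” compared to Life’s 8-cell Moore neighbourhood.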

If you think this stuff is cool, check out some of the work on “artificial chemistry” systems composed of discrete particles which move and interact in a continuous space. My personal favorite examples are from Sayama’s “swarm chemistry” experiments, but I haven’t really kept up with the field.

As with most stuff, there’s already a standard for that. One company, Surety, even puts the hash of their timestamp ledger (hash chains) in the New York Times to create a paper trail. I’m sure the decentralized checking part could be scaled horizontally a bit without much change in protocol or energy usage. The individual operations are still simple enough to do on chips that are a few bucks each.
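For the curious, a toy hash-chain ledger of this sort fits in a few lines of Python. This is my own illustration of the general idea, not Surety’s actual protocol: each entry’s hash covers the previous entry’s hash, so publishing only the latest hash (say, in a newspaper ad) commits to the entire history.

```python
import hashlib
import json

GENESIS = "0" * 64

def entry_hash(prev_hash, payload, timestamp):
    blob = json.dumps([prev_hash, payload, timestamp], sort_keys=True)
    return hashlib.sha256(blob.encode()).hexdigest()

class Ledger:
    def __init__(self):
        self.entries = []   # (prev_hash, payload, timestamp, hash)
        self.head = GENESIS

    def append(self, payload, timestamp):
        h = entry_hash(self.head, payload, timestamp)
        self.entries.append((self.head, payload, timestamp, h))
        self.head = h
        return h

    def verify(self):
        """Recompute every link; any tampering breaks the chain."""
        prev = GENESIS
        for stored_prev, payload, ts, h in self.entries:
            if stored_prev != prev or entry_hash(prev, payload, ts) != h:
                return False
            prev = h
        return prev == self.head

ledger = Ledger()
ledger.append("sha256-of-document-1", timestamp=1)
ledger.append("sha256-of-document-2", timestamp=2)
print(ledger.verify())  # True: the published head vouches for both entries
```

Anyone holding a copy of the entries plus the published head can re-run the verification; no proof-of-work is involved.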

The big feature that Bitcoin and other blockchains bring to the table is decentralization. If you can rely on a company for stewardship of your ledger, then by all means use a permissioned database like Surety does.

On the trusted timestamping page you linked, if you skip to the decentralized section, you can see it immediately starts talking about Bitcoin.

I’m not sure how much Surety’s service costs, but piggybacking on the Bitcoin or Ethereum blockchains is likely far cheaper. Here is a tutorial on how to store a message as an Ethereum contract. The cost varies with the string length, but in this case it was only about $0.20. It works by deploying a Solidity contract that is just a couple of string variables. The output is observable on Etherscan.

In my model, several foundations in different countries run by different people would agree on a protocol. It would store stuff in SQLite, FoundationDB, or something similarly fast/resilient. A web or app server with plenty of cache would give snapshots of the ledgers. They’d charge a fixed price for bandwidth and storage which could go up as the tech improves.

This setup, for something small like hashes with a niche audience, could run on $5/mo VMs. Even dedicated servers with 5-way redundancy and years of compute, storage, and bandwidth would be just over $1,000 a month. The components they’d use are so vanilla the admins could be part-time. How much does Ethereum or Bitcoin cost in comparison?

Check it out, this message cost $0.80. Zero sysadmin effort on my part due to leveraging a preexisting system. Also, the message won’t vanish if I stop paying VPS bills.

If you’re a large corporation that wants to timestamp thousands or millions of messages, the centralized approach could very well be cheaper. For me, verifying maybe a handful of messages per year, it’s way easier to piggyback on a large blockchain project.

That’s a decent point. If you’re just externalizing and aiming for low cost, you can post the messages in threads on diverse forums, Pastebin, etc. I used to do that with hashes on blogs. Never cost a cent.

I should have worded that differently… I don’t timestamp messages all that often. What I meant to convey is that $1000/mo is definitely overkill for anyone with intermittent needs.

Pastebin and forum posts are fine, but centralized. If Pastebin ever goes down, or starts manipulating old posts, then the integrity of your verification is compromised. Embedding the message in Ethereum’s blockchain is a much stronger guarantee of permanence and immutability.

The $1,000/mo is for the hardware and bandwidth to run the alternative to a blockchain. On the blockchain, you’re a user that pays for the tiny portion that you use. In the alternative, you’d similarly pay for a tiny portion that you use: maybe a membership fee that covers the general cost of operations, with you paying for the usage parts at cost. I gave the example of $5 VMs to illustrate the difference from whatever Bitcoin is doing for mining or transactions. I imagine that takes a bit more hardware than $5/mo.

The other article today said companies were paying $10,000 a unit for the hardware that supports this system. My hypothesis was that, for the same price, you’d get orders of magnitude better performance with a year of usage and five-way checking. Adding actors that don’t trust each other adds only a small amount to the system without dragging down the speed of its main DBs. Whereas the folks buying the ASICs are spending tens of millions to support almost nothing in terms of transactions. The traditional tech is so cheap that I was using blogs to do my version of it; they didn’t even notice. That’s the difference between cryptocurrency tech and traditional tech w/ decentralized checking.

It doesn’t solve the permanence problem, but just signing text is sufficient to address tampering, and that doesn’t use a lot of electricity. So is permanence the selling point? There’s also IPFS, which is decentralized without requiring PoW; would that plus signing be sufficient for your needs?

Basically, I’m still struggling to figure out what the blockchain does that makes the excessive energy usage worthwhile. Maybe I’m just being narrow-minded, but I still only really see financial speculation as the primary motivator. If that becomes unviable, why would anyone continue to run a Bitcoin node (and there goes the permanence)?

Oh Lord… they’ve gotten bitten by the bug. No surprise, though, since it’s a fad with momentum and lots of money. I expect any company that can do a blockchain product to build one just to make money off it. Given their prior work, there’s little reason to think they actually needed a blockchain vs hash chains with distributed checking and/or HSMs. Just cashing in. ;)

Btw, do check out that Functional-Relational slide deck I submitted. It shows the Out of the Tar Pit solution is essentially what the new GUI frameworks are doing; it was just years ahead. So, maybe there are some practical uses for some version of their model.

Unlike traditional approaches that depend on asymmetric key cryptography, KSI uses only hash-function cryptography, allowing verification to rely only on the security of hash-functions and the availability of the history of cryptologically linked root hashes (the blockchain).

I hear cryptocurrency people touting Estonia’s BLOCKCHAIN REVOLUTION as great news for Blockchain, and even great news for cryptocurrency. It’s not even a blockchain.

I mean, I have no reason to think there’s anything wrong with it. I’m sure it does its job just fine. But goodness me, it’s the greatest marketing success the buzzword “blockchain” ever saw.

If anything, it was a great way to show we didn’t need a blockchain when our older concepts were working fine. They might benefit by using the buzzword, yet such misleading usage just reinforces the phenomenon where the BS spreads further.

I’m not even sure it’s reversible at any level, given these fads usually either level off or implode, with the name and reputation damage permanently attached to whatever the name touched. AI Winter, expert systems, and Common LISP are some of the best examples.

There’s probably a post I need to write on this topic: basically, we’re going to see a resurgence in the popularity of linked lists with hashes, and they’re going to be branded “blockchain(tm)”. There are a few non-bogus projects along these lines, but most aren’t actually so great, and in nearly all cases they should have just used a frickin’ database.

Likely case, we get mostly-working systems that have an eternally painful “blockchain(tm)” implementation at the core that can’t easily be replaced by something sane.

Sure thing! Trusted timestamping is actually one of my go-to examples of hash-chain tech that predates the blockchain craze. What the timestamping-on-blockchain folks hope to achieve is what such companies have been doing reliably and efficiently for years now. Better to just invest in and improve on the efficient models that already work.

The thing I push is centralized, standard ledgers with decentralized checking. For Surety done that way, subverting it would take bribing all the checkers. Alternatively, HSMs can mitigate some of the insider risk.

I feel like the basic message here is extremely important and always worth repeating and elaborating. The delivery is well done too. But the specific content? The author’s worried about giving the DoD (and by extension who knows what other purveyors of violence) greater capability to locate cell phones? Umm… in case anybody else missed the multiple memos on this one:

A good read, for sure. And some good ideas. But the authors only focus on technical factors, as though software were developed exclusively by programmers for their own purposes. They don’t address, for example, Conway’s Law or any other sources of complexity which don’t originate in the development process itself. They talk about formalizing requirements, but not where the requirements come from or how they got to be the way they are, or how they change over the course of development.

It’s certainly easier to frame the issue as being about technical problems and technical solutions. And there’s certainly plenty to talk about in that frame. But technological determinism by itself usually doesn’t have much predictive or explanatory power, which is why these kinds of accounts have largely been abandoned by professional historians and sociologists who study technology. Even amateur software historians (who are doing most of the work!) typically point to business, marketing, or economic factors as being decisive influences in the development of the technologies they document.

Take your favorite “radically simple” system: say APL, or Forth, or Oberon, or Smalltalk or whatever. Step away from the shiny stuff and look at the people and organizations involved: who actually developed it, who paid for it, who used it and what they used it for. Then do the same for whatever “typically complex” web-app or C++ game or Free Software OS or government payroll system or whatever. The differences may be instructive.

I’ll start. The big difference between simple systems and typically complex web apps is scale. Small codebases can do more with fewer team members. They become more likely to have better programmers. They suffer less from knowledge evaporation due to people leaving. They hire less, and so they tend to have fewer layers of organization. This keeps Conway’s Law from kicking in. The extrinsic motivational factors of money, raises and promotion don’t swamp intrinsic motivation as much.

I’ve gained a lot of appreciation over the past decade for the difference between technical and social problems. But in this instance the best solution for the social problem seems to be a technical solution: do more with less (people, code, concerns, etc., etc.). It doesn’t matter what weird thing you have to do to keep things cosy. Maybe you decide to type a lot of parentheses. Or you stop naming your variables (Forth). Or you give up text files (Smalltalk).

Once you have a simple system, the challenge becomes to keep the scale small over time, and in spite of adoption. I think[1] that’s where Lisp ‘failed’; of all your examples Lisp is the only one to have tasted a reasonable amount of adoption (for a short period). It reacted to it by growing a big tent. People tend to blame the fragmentation of Lisp dialects. I think that was fine. What killed Lisp was the later attempt to unify all the dialects into a single standard/community. To allow all existing Lisp programs to interoperate with each other. Without modification. Lisp is super easy to modify; why would you fear asking people to modify Lisp code?

Perhaps the original mistake was the name Lisp itself. Different Lisp dialects can differ as greatly as imperative languages. Why use a common name when all you share is s-expressions?

A certain amount of curmudgeonly misanthropism in a community can be a very good thing.

[1] I’m just a young whippersnapper coming in after the fact with my half-assed pontificating, etc., etc. I don’t mean to side-track a discussion of complexity with Yet Another Flamewar About Lisp. (Though I’d appreciate any history lessons!)

We now examine a simple example FRP system. […]
To keep things simple, this system operates under some restrictions:

Sales only — no rentals / lettings

People only have one home, and the owners reside at the property they are selling

Rooms are perfectly rectangular

Offer acceptance is binding (ie an accepted offer constitutes a sale)

This kind of toy example makes their observations on software complexity in general harder to take seriously.
It reminds me of the hoary genre of “spherical cow” jokes. All of those simplifying assumptions (and no doubt plenty more unstated ones!) make their example system more or less completely useless to an actual real estate business.

I agree. Especially on Nos. 2-4, since they represent situations that either don’t map cleanly to a neat model or just ignore the corner cases that real systems can’t ignore. The models always need to be tested with the ugly requirements on top of the easy ones.

Last year, ST didn’t support ligatures, and I wanted to try them out. So I went off and used first Atom and then VS Code for a month or two. Best thing I can say about either is that they have quite a vibrant plugin ecosystem. When I decided that code ligatures were actually kind of a dumb idea (and definitely not worth the Electron bloat, MS spyware, and assorted rough edges), I came back to ST3 with new appreciation for its relative simplicity, stability and performance. Now, I see ligature support added in 3.1: ain’t that always the way?

Along the way I met my new favorite console editor, vis. Perhaps someday it will replace my GUI editor entirely.

Have you looked at Kakoune? I switched from vis to Kakoune a few months ago; I found Kakoune’s editing commands easier to learn than vis’ structural-regular-expression syntax, and I prefer the way Kakoune supports multiple editor windows that I can manage with my normal window manager, over vis’ (and Vim’s) window-splitting system.

In my spare time I’m working on a GTK+ based front-end for Kakoune, at least partially because I wanted code ligature support in my editor. :)

Then, re-wrapping the current paragraph is <a-a>p=. With a bit of effort, you could probably wrap that into an insert-mode mapping for <a-q>. par is a fairly smart third-party rewrapping tool, smarter than coreutils’ fmt, but not quite as smart as Vim’s built-in formatting feature.

If you really want to look at the fundamental science and history of the major network protocols (and maybe get a glimpse at why there’s so little innovation in this area) I highly recommend John Day’s “Patterns in Network Architecture: A Return to Fundamentals” (Prentice Hall). It’s out of print, but not too hard to find. Most of the material is online as well, on the Pouzin Society website. You should be aware that it’s not exactly the most practical book, since Day’s a bit of an iconoclast, but he’s also an old-timer who’s experienced much first-hand. Very interesting read. I think you’d enjoy it.

+1 for this. I’m using TaPL in my PL class this quarter and it’s awesome. Super well written and ML is great for this class - the work involves writing successively more complex interpreters for successively more complex toy languages. We’re not sticking strictly to the book, but the sections our prof has pointed us to have been great.

It’s really unfortunate that the idea of live coding remains niche, and most programmers haven’t had exposure to this style of development. Working in an environment where you can inspect and change any aspect of the system at runtime is the most satisfying coding experience, in my opinion.

Coding hackers tell me they want to understand, change, and improve on anything they can get their hands on. Live coding systems let them do that to their whole running system. The two just seem like a natural fit, which makes it so much stranger to me to see such people use tools that limit them so much.

It’s probably why people have tended to think of Emacs, Common Lisp, and Smalltalk (and Forth?) in relation to the “Quality” described in Zen and the Art of Motorcycle Maintenance, or the “quality without a name” that Christopher Alexander ascribes to living buildings.

I mostly agree. It doesn’t have the same deep metaprogramming aspect, but the actor model is a much cooler form of concurrency than the languages I listed, and things like supervision hierarchies have quite a strong sense of being “alive.”

Yeah, I don’t know if it fits, since I don’t use it. I will say that making distributed, concurrent systems easier and more robust by design, with the ability to update running systems, makes it pretty close to the idea. It’s kind of in its own category in my mind: the LISPs were focused on max flexibility, where things like Erlang or Ada are focused on max reliability. Erlang improved on things like Ada in its category by being more high-level and distributed, with the live updates, etc.

Now, it might be interesting to combine the two. I swore someone had done that… (checks bookmarks) Oh yeah, it was Lisp Flavored Erlang. I can’t evaluate whether it truly has the benefits of the typical LISP workflow plus Erlang’s, since I don’t know Erlang or LFE. It looked really awesome conceptually when I found it. Anyone else want to chime in?

Pervasively dynamic environments are a two-edged sword: with great power comes great responsibility. It’s trivial to render a Smalltalk image completely unusable with something as simple as false become: true. PicoLisp has similar capabilities. Most Forths do too. There’s something to be said for isolating and stabilizing some basic parts of the system: such limitations give us a margin of safety!

Lisp, Smalltalk, and Oberon are all good examples of the programming-language-as-an-operating-system paradigm.

Today you still see it somewhat in the more enthusiastic Emacs users who haven’t seen the obvious superiority of vi: plenty of people enter Emacs on login and don’t leave it until logout.

The most modern incarnation, though, is the combination of JavaScript and HTML: the browser has become, in many ways, the operating system for many people. My wife has a Chromebook and it fulfills 100% of her computing needs.

At the other end of the complexity gradient, there are tiny self-hosting Forth systems with metacircular evaluation kernels. Beautiful, but delicate!

One practical distinction to draw between these systems is how easy or difficult it is to screw up the whole system (and, conversely, to recover from such screw-ups). Most of ChromeOS is effectively sealed off from the user, even more so than in a conventional OS. ChromeOS blurs the distinction between a browser and an OS, which makes some sense in the modern era: it’s a distinction that simply doesn’t matter to many users.

If humans are going to be reading, supporting, and re-writing code, I don’t see why we’d want to eschew one’s strongest language, say, English, in favor of one that reads like hieroglyphics.

Look at what humans already do in domains that require precision, as programming does. “reads like hieroglyphics” is a fair description of a lot of mathematical notation, yet mathematicians have long preferred symbology to English - if you think programmers shouldn’t then you should be able to explain why mathematicians do. Lawyers write “English” but in a famously stilted style, full of clunky standardized constructions, to the point that it’s sometimes considered a distinct dialect (“legalese”) and one could reasonably ask whether a variant that used a symbol-based language would be more readable. And of course it bears remembering that the most popular human first language is notated not with alphabetics but with ideograms where distinct concepts generally have their own distinct symbols.

Even within programming, humans who are explaining an algorithm to other humans tend not to use English but rather a mix of “pseudocode” (sometimes likened to Python) and mathematical notation.

Erlang is notorious for being ‘ugly’, but I wonder what that’s all about. Truly. I like to think most of my Erlang code is composed of English sentences, lumped together into paragraphs and contained in solitary modules. It’s familiar, unsurprising, and quite beautiful when one’s naming is in top form.

Really? Most languages allow sensible plain English names for concepts. The part of Erlang that’s notoriously ugly - and the part that makes it read very unlike English - is its unusual punctuation style. If the author really finds Erlang English-like to read, I can only assume this is because they’re much more familiar with Erlang than other languages, rather than the language being objectively more English-like than, say, Python or Java.

Similarly in code, some parts are more like formulae, some are more like prose, and some are more like tables or figures… and it’s interesting to consider separate syntaxes for these different types of definitions.

Have a look at the Inform 7 manual’s section on equations, for an example. Here is a (formal, compiling, working) definition of what should happen when the player types push cannonball (I’ve used bullet lists in Markdown to get indentation without monospace):

- Equation - Newton’s Second Law
  - F=ma
  - where F is a force, m is a mass, a is an acceleration.
- Equation - Principle of Conservation of Energy
  - mgh = mv^2/2
  - where m is a mass, h is a length, v is a velocity, and g is the acceleration due to gravity.
- Equation - Galilean Equation for a Falling Body
  - v = gt
  - where g is the acceleration due to gravity, v is a velocity, and t is an elapsed time.
- Instead of pushing the cannon ball:
  - let the falling body be the cannon ball;
  - let m be the mass of the falling body;
  - let h be 1.2m;
  - let F be given by Newton’s Second Law where a is the acceleration due to gravity;
  - let v be given by the Principle of Conservation of Energy;
  - let t be given by the Galilean Equation for a Falling Body;
  - say “You push [the falling body] off the bench, at a height of [h], and, subject to a downward force of [F], it falls. [t to the nearest 0.01s] later, this mass of [m] hits the floor at [v].”;
  - now the falling body is in the location.

(Yes, the Inform 7 compiler will solve equations for you. Why aren’t normal programming languages capable of this kind of high school math? Are we living in some kind of weird bubble?)

Of course it can be solved in any general purpose programming language, but none of them feature dimensional analysis or algebraic equation solving out of the box in a convenient and natural way… yet Inform 7 does, oddly.
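For contrast, here’s roughly the same cannonball calculation sketched in Python with SymPy (a third-party library; the specific numbers are my own illustrative values). It gives you the algebraic solving as a library call, though not Inform 7’s prose syntax, and the units here live only in comments rather than being checked automatically:

```python
from sympy import Eq, solve, symbols

F, m, a, g, h, v, t = symbols("F m a g h v t", positive=True)

newton  = Eq(F, m * a)                 # F = ma
energy  = Eq(m * g * h, m * v**2 / 2)  # mgh = mv^2/2
galileo = Eq(v, g * t)                 # v = gt

values = {m: 2.0, h: 1.2, g: 9.8}      # kg, m, m/s^2

# Solve each equation for the unknown we need, as Inform 7 does for us.
F_val = solve(newton.subs(a, g).subs(values), F)[0]       # force in N
v_val = solve(energy.subs(values), v)[0]                  # speed in m/s
t_val = solve(galileo.subs(v, v_val).subs(values), t)[0]  # time in s
```

SymPy does have a separate `physics.units` module for dimensional analysis, but nothing is as convenient and integrated as Inform 7’s “where F is a force” declarations.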

I find this interesting because that stuff would seem to be the most obvious use case for computing machines from, like, a 1930s perspective.

I completely agree. Python (and maybe Basic?) are close, but even then they fall dramatically short of “language as code” in the way Inform 7 achieves it. I wonder what keeps Inform 7 from becoming a more general-purpose programming language?

Also, English isn’t everyone’s strongest language. As natural languages go, it’s pretty complex and inconsistent. If you want your code to be understood by people who are more familiar and comfortable with other natural languages, then your own familiarity with English isn’t necessarily such an advantage in writing code.