The concept of "application" is so obtuse and obsolete that using computers, or anything with "applications", is a terrible experience: a multitude of systems that don't talk to each other and only reluctantly exchange any information between them.

We're still in the pre-history of computing, same as we were in 1965. Maybe closer to the end of pre-history, but pre-history nonetheless.

@h Remember that "application" and "appliance" ultimately have similar roots, if not the *same* root. When the Macintosh first came out, Apple foresaw it becoming an appliance in the future, and it is most regrettable that this has happened. Computers today, even Linux machines, are *appliances*.

Insofar as the apps do what you want them to do, that's OK for people who don't care to know/learn about their computers. For the rest of us, it's a straitjacket.

@h That's an unnecessarily narrow view of an appliance, in my estimation. MS, Apple, et al. do not encourage people to poke around under the hood. Interfaces between programs are hard barriers. Computers today have "no user-serviceable parts inside." That, to me, makes them de facto appliances.

When my attempts to install Ubuntu on my old desktop bricked the motherboard, that's when I learned that any device that ships with an EULA is an appliance. Full stop.

@vertigo What I mean is that in the case of software, such barriers (this "appliancefication") are totally artificial, due to design decisions and conventions that nobody agreed to accept but we keep accepting anyway. For absolutely no good reason other than lack of vision.

@h I don't think it was lack of vision; it was the vision of putting computers in the hands of untrained masses that drove the interfaces we have today. Therein lies the problem: untrained.

The untrained masses became either complacent, or worse, actively reveled in their ignorance. This is literally their point of view: "Why should I learn how to type this god-awful cryptic gobbledygook when I can just drag and drop these pretty pictures? Reading is hard! Let's go shopping!"

@vertigo You have a point with oversimplification of some things, but I don't think that cooperative programs necessarily have to be hard to use. They could also be drag and drop and they could work plugging things and pulling levers all the same.

@vertigo Sure, you probably shouldn't make cryptography software drag and drop where you need great precision, but for a number of commonly used tasks it shouldn't be so damn hard to connect the output of one program to the input of another.

It should be just like connecting pipes, a bit like IFTTT. Any program to any other program. It's the hard barriers between programs (in theory written by at least somewhat trained people) that don't make sense at all.

1. Write a program that has some struct.
2. Enable it to send this struct to another program.

(Serialising and sending over sockets is cheating. There is no need to waste cycles on serialisation between two programs running in the same memory space, on the same architecture.)

@h Ahh, yeah, AmigaOS let you do that but only because it's a single-address-space OS without any kind of memory protection.

To do this in a Unix environment, you'd need to use shared memory interfaces, and some agreed upon means of two or more programs rendezvousing with each other to coordinate who has access to what data and when.

@h I also had the idea of porting GNU/Hurd as well, or MINIX 3, or Plan 9. But, in all honesty, I can easily get exec.library off the ground a lot sooner than I can get any other of these OSes off the ground.

Maybe I can bundle "dos.library" as well, albeit as a normal library, and not as a BCPL library.

So between dos.library and exec.library, I'd have a functional, if minimal, operating system kernel. I'd just need a reasonable user-land environment.

@vertigo OSes should have the equivalent of Go channels as part of the standard kernel. At worst, a message queue that is optimised for fast memory sharing. I think the use of shm is being discouraged with reason.

@h When you think about it, IBM's System/360 was just like the Amiga when it was first introduced: a shared, single-address-space environment. It had 2KB quasi-pages which prevented one task from writing into another task's memory, but *nothing* stopped tasks from *reading* other tasks' memory. Today, z/OS is fully memory protected.

So, somehow, there must be a way to evolve an AmigaOS-like environment without breaking compatibility.

@h I've been putting some thought into this, and I came up with an idea that I thought would perhaps work, even supporting multiple address spaces.

Legacy binaries would be loaded into a common region in each process' address space. This common region, like the kernel, would appear in every process; therefore, it behaves exactly like AmigaOS currently does.

New binaries would be loaded into process-private memory. To be able to use exec's messaging, >>

@h *IF* this works out, and I think it has a strong chance of doing so, this would completely preserve Exec's simplicity and message-port semantics, it'd allow safe and non-safe binaries to interoperate, and it would provide an upgrade path for AmigaOS which, until now, has long been considered an impossible dream.

Since my current CPU lacks any MMU, the kernel would not support "safe" binaries. It'd be shared-memory, single address space, just like Kickstart 1.3.

*After* I build the MMU for it (OR, after I switch the CPU out for a Rocket core), then I can upgrade the kernel to add support for "safe" binaries and implement the new system calls needed to make communicable regions of memory.

Actually, OSX has two different ways to pipe apps together; they're really handy and powerful, but I'm always surprised by how little they get used. One is UI-based, trying to be accessible to end users (Automator), and the other uses a scripting language (AppleScript).

Yeah it's always nice when you remember that it exists. "oh shit, why was I lazy and entered all these notes into the default Notes app, there's no export function so now I'm locked in 😟 oh wait.... time to check if there's AppleScript hooks! 😋"

yeah I've finally switched for good from iOS to Android, and while I've always used Linux in parallel to OSX, I feel like I'm almost ready to break up with the Mac, the hardware has been getting worse as well, so there aren't too many reasons left...

@mayel @charlag @antanicus @h @vertigo My feeling about technology at present is that there isn't any panacea and all systems have various levels of problems. "Security experts" (makes air quotes gesture) will often tell you that iOS is better because it has a secure enclave supposedly implemented in a better way than Android's full disk encryption. Even if that's true there are still other considerations, especially with different threat models.

My take is that Android is better, especially if you can use LineageOS and Fdroid. If you have that type of software setup then you're going to bypass a lot of the worst aspects of contemporary proprietary software.

@h I've seen many attempts at this interface, but none having any real success. You mention IFTTT; that's a pretty specialized tool, and first and foremost it requires people to be aware of its existence, and second, the applications that you use it with have to be compatible enough to work well with it.

@vertigo It may not be IFTTT's fault that it's not part of systems software. Alternatively, it's possible that IFTTT's fault is that its business model requires global-scale distribution via the www. In any case, it's not a problem intrinsic to the core design of IFTTT. I don't think it's the ideal model, but it offers a glimpse of things that could be possible on the desktop. For some reason we're stuck with roughly the same GUI we had in 1985, only with fancier colours.