I made the effort to format this in markdown, so feel free to enhance your reading experience.

As mentioned in >>1000224, I have now made an attempt to gather ideas and put them into one post. I have also added some of my own.

# Introduction

Since the introduction of the anti-meritocratic "Contributor Covenant" CoC, many seem to fear that this is the downfall of the Linux kernel, and there is a debate about a Linux replacement going on. There have been (direct and indirect) attempts to generate a "movement", or establish a group of people to develop a new operating system that is not plagued by a CoC, however, without much success as of yet.

Since many are not content with existing alternatives, and are in favour of a completely new "untainted" operating system, I have taken it upon myself to attempt to create a basis for collaboratively planning an OS.

# Modern design

A modern, proper operating system should think into the future and dare to prioritise making bold design decisions over compatibility with existing operating systems and software libraries. Although compatibility is great to immediately have, if it means adopting bad design choices from other operating systems, then there will be detriments in the long term.

Since we are moving away from Linux, I think it is an appropriate moment to break the cycle of bad design choices propagating themselves through platforms because of compatibility. As we do not need backwards compatibility in our OS (as there is no previous version), we have the opportunity to design a new operating system interface that fits current technology. We can take this opportunity to take a good look at the problems of previous platforms, and ensure that our platform does not have them.

This will be a major project, and take much time. The OS will most probably not be as fast as current OSs, as they are more mature and have had much time for optimisation. It will most probably not be able to run any games, but that is not important in my eyes: Games pressure the platform into providing efficient hardware drivers, which often leads to proprietary software (vendors' drivers).

We need to organise in some way to achieve anything with the project. We need to learn from the mistakes of the Linux project and prevent things like CoCs or corporate/government infiltration.

Therefore, I propose the following guidelines (which are OPEN FOR DISCUSSION):

## Project

1. All code written should be licensed as free software, preferably GPL3, to prevent *Embrace, Extend, Extinguish*, Tivoisation, and the like. I am not sure if AGPL3 has any merits over GPL3 if used for an operating system. If yes, then I prefer using AGPL3. Even though version 3 protects against rescinding granted licenses (and rescission is crucial to the idea of destroying Linux after the CoC), we prepare against CoCs beforehand, so there should be no need to avoid version 3.

2. The OS should be architecture-aware, but provide a well-chosen amount of abstraction. This is an important part: To specify a well thought-out API that is modern and reasonable. With the goal in mind to create a lasting platform, we should take our time to carefully consider how the OS should be designed.

## Anonymity

1. Contributors should use aliases, not only to prevent attention whoring, but also to protect against surveillance: There cannot be any blackmailing if nobody knows who is contributing.

We should organise in a decentralised manner, to avert attacks. I propose the organisation into feature forks, where a feature fork is a fork from an early stage of the project without any other features. To get the whole project code in one place, multiple feature forks can be merged. This is tedious, but highly resilient to centralised control.

1. To prevent centralisation of the codebase into a single repository (see Linux), I recommend that there should be many forks of the platform: After a certain base point, separate features should be worked on in separate forks, which then have to be merged to create a whole. This means that the whole project will be split into sub-projects. This may seem silly, but isn't: The current Linux spectacle has shown that having a monolithic main repository will lead to potential abuse by admins (refusing pull requests by blacklisted contributors etc.). In my proposed system, there is no main repository, every part of the system can be maintained by everyone separately. Of course, this will create an overhead when trying to find the latest forks for every feature, but, as I see it, this is the only way to prevent a dangerous centralisation of administrative power that could be used to destroy the project (again, see Linux). If we manage to master this form of organisation, we will have a decentralised, robust developer community that cannot be controlled in any way (short of personal threats).

2. Ensure that all feature forks are as early into the history as is reasonable. If features share as few ancestor commits as possible, it is easy to revert changes, and the code is kept decentralised. Imagine forking a 3D accelerator feature fork to create an audio driver feature fork. This would make the audio driver only usable in conjunction with the 3D accelerator, which bloats the system unnecessarily. Feature forks should not fork from features they do not rely on.

3. Prioritise a simple working set of features over fancy features that are not crucial. The earlier the platform can be used to actually develop applications on, the earlier we can enjoy it and notice design choices to be reconsidered. Fancy features such as 3D acceleration are not important early on and should be worked on later.
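The feature-fork workflow sketched above can be tried with plain git; the repository and branch names below are invented for illustration, with local branches standing in for independently hosted forks:

```shell
#!/bin/sh
# Hypothetical feature-fork workflow: two features fork from the same
# minimal base and are later combined with an octopus merge.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q os && cd os
git config user.email anon@example.invalid
git config user.name anon
git commit -q --allow-empty -m "minimal base"
base=$(git symbolic-ref --short HEAD)   # default branch name

git checkout -qb feature-audio "$base"  # audio fork: base only, no other features
echo "audio driver" > audio.c
git add audio.c && git commit -qm "audio feature"

git checkout -qb feature-net "$base"    # net fork: also forked from the bare base
echo "network stack" > net.c
git add net.c && git commit -qm "net feature"

git checkout -q "$base"                 # assemble the whole from independent forks
git merge -q feature-audio feature-net -m "assemble system from feature forks"
```

Since the two forks share only the base commit, either feature can be dropped, reverted, or swapped for a competing fork without touching the other.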

Whatever, anon. I did not try to, nor did I intend to, get ALL of /tech/ to be OSDevs. And if you put in effort, it is not impossible to do, even on your own. However, I am convinced that a collaboration of a group of developers can produce higher quality code and architectures.

And don't meme your way out of responsibility or strawman me with some simplistic opinion. In the end it is everyone's responsibility to create a lasting, uncucked OS that is not controlled by trannies and megacorporations.

>Since the introduction of the anti-meritocratic "Contributor Covenant" CoC

By trying to be neutral and not standing without any shame for the truth (that equality is a false god, trannies/faggots are mentally ill and the only remedy that can restore their almost inexistent dignity is euthanasia and a lot more not-feels-good trivia), you've already fallen for their trap.

I don't give a shit about the mental health of other people, I have nothing to do with them. That is why I don't care whether they should be eliminated or not. I also don't think that this hatred against trannies and other mentally ill people is what should be the glue of our project, but the authentic desire to have an OS that is not controlled by big corporations or subject to someone's politics and mind games.

I'd like to see a truly modular kernel, composed only of a "hypervisor" that just loads and hotswaps the kernel in case it needs an update, a "system" or framework which just provides communication to the different modules, and modules which provide the actual functions and have a standardized API. You could say this is an exokernel, but I wouldn't be against optionally running it all in kernel mode with a compile-time flag.

I just want to see an OS where I can easily fuck any kernel module's shit up by removing a single reference to it in a config file, then replace it with an equivalent one which uses the same API.

Remember LibreBoot? Trannies are ticking timebombs; eventually something insignificant is going to make them snap and they will take down the entire project along the way. Humans work as a whole, and if one part is rotten it will spoil the entire person. Just look at furries, they are not content with drawing their shitty fetish art, they have to spread it everywhere and ruin any place they go to.

This is very similar to what I was thinking: Although my terminology is a bit sloppy, these modules would be the feature forks I spoke of. So to say, you have a basic minimal kernel, and anything that is not completely essential for the module system is then separately developed. I also sincerely hope that there will be multiple versions for every component, and a healthy sense of competition.

The problem with standardisation is that there needs to be a consensus, but since we are anons here, it is hard to get a real consensus, so it might happen that there are slightly differing interfaces (+/- some functionality). So this might not be easy to achieve, but I guess it will be possible to get at least an almost-consensus on what an interface should look like.

No, I do not know what happened there, but I can imagine some scenarios.

I think that since there is only a decentralised organisation and no single rulemaker, no set of participants can fuck everyone else over, especially if we develop under GPL3 with no-revocation clause. Even if at some point trannies get into the project (we would not immediately know since they are anonymous), there is nothing they can do to the rest except fork the project and ruin their own forks.

> why not use a different kernel instead of an entire OS? Hurd is still waiting to be finished.

I did not look into Hurd yet, but I do not know whether they are truly modern, i.e., did they just implement POSIX-compliance or did they think up a new standard / interface that is up to date for the next decades?

I know we jerk off to how terrible the CoC is here, but you cannot make a new OS where the only feature is that it doesn't have a CoC. Linux will always win purely because it has more existing software.

>I made the effort to format this in markdown, so feel free to enhance your reading experience.

8ch has its own formatting. You can see it if you click on [options] up at the top, then "customize formatting".

In general, I suggest you lurk moar, and recognize that the only way projects get done around here is by having one sperg do all the work, and have everyone else tell him what features to have. You can either be that sperg, or give up now.

The problem here is that Linux has made itself a monolithic kernel: it has built in most of the driver support needed to run most hardware, in the form of modules that directly target Linux. This is both a great blessing and a curse, as porting to other sorts of systems becomes a pain.

GNU Hurd is by far the closest contender to mainline Linux there is, simply because there is an ongoing effort to both port these modules and use a wrapper layer for them.

I'm unfamiliar with BSD but I assume it's more or less the same.

The thing about GNU hurd is that it uses a micro kernel, this way of doing things is quite slow so no one really wants to use hurd.

bsd is... well... bsd.

Personally, a system I'd like to see get more support would be plan 9 (or 9front). It's from the same people who originally made unix so it's pretty fair system-wise; sadly it's cursed with suckless faggots.

>The thing about GNU hurd is that it uses a micro kernel, this way of doing things is quite slow so no one really wants to use hurd.

You can still have a monolithic kernel consisting of many modules that are chosen at compile time. If everything is designed to be as modular as possible, that makes the code easy to maintain and replace. Since one does not necessarily need to replace kernel modules at run-time, one can have the many modules statically linked. However, the architecture of the system should make replacing modules (at compile time) easy, without having to recompile the whole kernel.

> plan 9

I will have a look at it.

I also think that a capability-based approach to permission management is promising, instead of just sudo/non-sudo, there should be clearly defined capabilities to lock/unlock. If every program needs to request permission for potentially harmful capabilities, then the user is protected much better against malicious software.

I just had an idea: I think we could have the object files for every kernel module as part of the system, so that when a module is exchanged, the kernel is recreated by linking the object files with a new module, and replacing the old kernel library with the fresh one. However, I think that such a replacement would be non-trivial to do without rebooting the system. I guess we would need to have every module implement event handlers for the case where a module is removed / added / exchanged.

Fair enough. I did not expect /tech/ to be swarming with people begging me to participate in the project. However, I will start the project regardless, and I guess that somewhere along the way, someone will join eventually.

>I also think that a capability-based approach to permission management is promising, instead of just sudo/non-sudo, there should be clearly defined capabilities to lock/unlock. If every program needs to request permission for potentially harmful capabilities, then the user is protected much better against malicious software.

No. This problem is already well solved using group permissions (cf. the audio, video, usb, etc. groups); it just needs to be extended to the stuff that currently needs capabilities, which means more stuff under /dev.

The main thing to do is solve the lack of interchange format for UNIX tools.

1) Tabular data: choose two characters for FS and RS (hint: ASCII RS and US are already there for that purpose) and forbid them in filenames. A more reasonable approach would be FS=\t and RS=\n. Now, all tools must follow the RS and FS env variables and some --rs and --fs options to read and write their stuff.

2) Long options: they're needed so we can have consistent option names without single-letter conflicts. In fact, ban short options; aliases are here for shortening your command lines.

4) Unfuck Perl: reminder that it was supposed to be a more complete AWK+sed+sh, and nothing more (this is already a lot; I suggest you remove the sh part).

5) Unfuck sh: doing 1) will make the work almost nonexistent; stuff like associative arrays (thus no need to IFS-split variables as a hack to emulate arrays; see zsh) and maybe threads (having to use FIFOs to communicate between shells and subshells is painful) would be good. Make it typed, too (just keep the types simple: int, float, string, array should be enough). Don't do a horror like rc; sh has a good syntax.

6) Unfuck C, of course, but keep it simple, so writing a compiler doesn't become too complex.

7) Unfuck signals; no idea how to do that, but there's something to do.

8) Steal the good shit from plan9 (maybe even use plan9 as a base) like bind, 9p or the POSIX compat layer.

9) Fix UTF-8 by having 1 codepoint == 1 grapheme, 4 bytes should be enough for this shit.
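Point 1) can already be approximated today with awk, which honours arbitrary separators; here ASCII US (0x1f) separates fields and ASCII RS (0x1e) separates records, as the hint suggests:

```shell
#!/bin/sh
# ASCII Unit Separator between fields, Record Separator between records.
# The consuming tool is told about both via FS/RS and never trips over
# tabs or newlines inside the data.
US=$(printf '\037')
RS=$(printf '\036')
records=$(printf 'alice%s42%sbob%s17%s' "$US" "$RS" "$US" "$RS")
out=$(printf '%s' "$records" | awk -v FS="$US" -v RS="$RS" '{ printf "%s=%s;", $1, $2 }')
echo "$out"
```

Making every tool honour FS/RS environment variables the same way awk honours its FS/RS variables is exactly the proposal.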

I was actually thinking about C or C++ without the advanced features that fuck everything up. I personally think that Rust is a meme language, but if you can convince me, then why not?

I think it is important to choose the language such that modules can be written in multiple languages and still work together. So it would have to be something that is C-compatible (I think Rust should be C-compatible though).

But before programming, we need to set some major design goals. Are you experienced / knowledgeable / skilled in OS design?

>Do you want concurrency? If yes, what kind of? Processes or something else?

I think concurrency is important, but I also think that one should have strong control over it (i.e., not have 2 threads on the same core). That said, I am in favour of processes, threads, and userland threads. With "or something else?", I assume you were referring to processes vs threads?

> If processes, what kind of IPC do you want?

I am not sure, I think that shared memory would be great for efficiency, but I would probably implement a basic message passing API as well as a shared memory API, since shared memory is more tricky to use.

<desire to have an OS that is not controlled by big corporations or subject to someone's politics and mind games

Nigger, I repeat: WTF is your goal here in starting from scratch? You could fork that shit and reap the benefits of well-worn code. You have to present some pragmatism which counterbalances that, or it's just a shitty research project.

>A modern, proper operating system should think into the future and dare to prioritise making bold design decisions over compatibility with existing operating systems and software libraries. Although compatibility is great to immediately have, if it means adopting bad design choices from other operating systems, then there will be detriments in the long term.

Nope, not that many. A few fringe elements on imageboards, and people like MikeeUSA. The vast majority of significant contributors to, and users of, the Linux kernel are utterly unconcerned by the CoC. Or if they care, they're not making it known, and they certainly don't care enough to build a new OS from scratch. I'm against the CoC in any form, especially the Contributor Covenant, but you're mischaracterizing the situation.

>think into the future

>dare

>bold

Meaningless marketing drivel. Are you sure you don't just want to go work for Apple? They love that kind of talk there.

>We need to organise in some way to achieve anything with the project.

No shit.

>prevent things like CoCs or corporate/government infiltration

A project is either insignificant enough not to draw the attention of glowdarks, or significant enough to draw the attention of glowdarks. If the former, infiltration is not a concern. If the latter, you can't stop it. Good luck finding people who can not only program an OS but have the tradecraft to thwart glowdarks.

>All code written should be licensed as free software,

OK.

>preferably GPL3,

Pig disgusting.

>The OS should be architecture-aware, but provide a well-chosen amount of abstraction.

wut

>Contributors should use aliases, not only to prevent attention whoring,

Attention whoring is the name of the game in open source/free software. Reputational benefits are one of the few motivations that people have for contributing to this kind of software. Now you've narrowed your pool of potential developers further: OS developers with impeccable tradecraft who want absolutely no credit for their contributions.

>but also to protect against surveillance

If a well-resourced intelligence agency wants to know who the contributors to your project are, they will almost certainly be able to find out.

>I propose the organisation into feature forks, where a feature fork is a fork from an early stage of the project without any other features. To get the whole project code in one place, multiple feature forks can be merged.

Software development as Rube Goldberg machine. What a clusterfuck.

>To prevent centralisation of the codebase into a single repository (see Linux) etc.

You're confusing a social problem for a technological problem, and proposing a technological solution to it. The fact that Linux is in a "monolithic repository" is irrelevant. Under your scheme, if there are 4 OS devs who each maintain part of the OS, the minute that 2 devs disagree with the other 2 about the direction that one of the components should take, you have a fork on your hands. The minute that 3 devs disagree with the remaining dev about his chunk of the OS being official, the remaining dev's repo is just a few unofficial OS features: the 3 will take the last version of the remaining dev's code, assign it to someone else, and christen it the official version. The issue isn't one of centralized repository infrastructure, but centralized consensus. You're not going to solve that by splitting up the OS into "feature forks."

Come back with a few thousand lines of working code, and a plan that's less vague than "a bold, fresh vision for the future."

Sounds somewhat Unix-inspired so far. What about files? Or any other kind of shared namespaces of IPC objects. May I suggest a system where processes can create their own namespaces and pass them down to child processes?

He isn't giving recommendations for the kernel, but the OS consists of every program that comes with the system, and all the ways they can interoperate. An OS consists of a text editor, a shell, a terminal, a file system, a file manager, a web browser, and any other programs that will be useful to most users. The implication is that every single argument that has ever been had on /tech/, about the best editor, browser, language, font, etc, are all captured in a single OS. You therefore cannot reject any suggestion as "not part of the OS's tasks", because everything is part of the OS's tasks.

These are good goals, but there needs to be a middle ground. Forking for compatibility ties you to existing paradigms. But you can still fork to get the old code base. Linux has drivers for hundreds of different peripherals. You could fork linux, rip out any syscalls you don't like, but keep the old drivers.

You cannot build a clean, modern operating system without clean, modern hardware. The reality is that for now we are stuck with 1980s CPUs like x86 and POWER. RISC-V is new, but it still uses the old ring-style protection. Mill seems like an interesting architecture, but it will take many, many years before they have hardware for sale. Targeting multiple architectures is a code word for targeting old architectures. That implies creating an OS that is not modern. If you really want to build a new OS, good luck, but that's like expecting a democratic solution to our current demographic problems. It's just not going to happen.

If you want to make computing great again, start with something simpler. Sadly, most of the big problems we are dealing with seem to be social problems rather than technological problems.

The old way that unix manages things is using the idiom "everything is a file". This clearly doesn't work in the modern age, so it should be replaced: everything is a web server. When you want to edit a text file, you wouldn't execute the file containing your text editor, passing the file containing your text. Instead you would visit the website of your text editor, and give it the URL of your text file to edit. Rather than a complex and unnecessary windowing system, you would have a simple web browser with a number of tabs. Rather than having to argue endlessly about programming languages and graphics toolkits, UI would be coded with HTML+CSS+JavaScript. Rather than having hundreds of different systems for IPC (signals, pipes, fifos, sockets, etc), communication between servers would be done with good old-fashioned GET and POST requests. The shell would be superseded by the search engine.

Some people might be butthurt that the familiar unix idioms will be dead as dodos. What you have to realize is that they're already dying; most applications are already written as I've described. It's time to finish the job, and move on to the future.

I approve. The web server should be Hyper. It is written in Rust so it can fearlessly utilize concurrency, it is memory safe, and it is blazingly fast. For the web browser the obvious choice is Firefox. But maybe we should use Servo until WebRender is fully integrated into Firefox.

but it doesn't list criticism. It sounds pretty standard as far as RPC goes. HTTP as RPC isn't my proposal though - it is used across the industry. Look up REST protocol for discussion of one of the common ways it's implemented.

Make modules or interfaces have a "role". This is what they do, regardless of how they do it. Interfaces which have a specific role have to comply with a "tier 0" API, which is the bare minimum a module has to do to be considered part of that role. Everything else is a "diversion" of the API. If it is properly documented and it is sound, it can be included in the "tier 1" API, which basically means "you should support this unless you are looking to make your program an embedded-systems exclusive". Anything else is part of the many "tier 2" APIs, officially documented, but not part of the standard.

Obviously, this should be transparent to application developers. Let all this be handled by the equivalent of the language's stdlib, another abstraction layer upon which the stdlib can be built, or whatever.

Sure you can store stuff in cookies and localstorage, but the server won't remember you.

Especially REST is a bad example. It is used to transfer state (hence the name), but keeping track of it is entirely done on the client side. REST calls are supposed to be idempotent. REST is made for CRUD. This means you can't use REST to start a server-side process that may have side effects on other data on the server. No such thing as "run function x() and give me the result".

RPC may be better suited for calling procedures, but as the very concept of http is that every call is isolated in itself, I'm wondering why you'd bother with http anyway.

Is this what you think HTTP is? You may be confused. The website you're shitposting on is using HTTP, and yet I can see the posts you make, and you can see mine. Generally the architecture is this: the client asks the server for a token. The server complies. The client then makes future requests with the token. The server then modifies the state associated with the token.

I would actually like to decouple files from pipes etc. Why would a pipe be accessible over the file system? IMO, the file system should only concern itself with storing and accessing files. Although a pipe can easily be implemented to be accessible from the file interface, I think that this is a dirty hack and that if someone wants to read a file, they should only be able to open actual files with their command. The same goes for devices, they are not files and should not act as such.

But using the namespace approach for pipes and files etc seems nice, even though I misunderstood it at first.

That is a very nice idea, I like it. The standard lib would then have tier 0 and 1 API support, but tier 1 functions have to be checked for availability before calling. Tier 2 functions have to be imported, as they can be too diverse to standardise.
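That availability check might look like this in shell, with functions standing in for module entry points (all names here are hypothetical): tier-0 calls are made directly, tier-1 calls are probed first.

```shell
#!/bin/sh
# Tier 0: guaranteed to exist, call directly.
audio_play() { echo "playing $1"; }
# Tier 1 entry point audio_resample is deliberately NOT defined here.

tier1_call() {
  # Probe for an optional tier-1 entry point before calling it.
  fn=$1; shift
  if command -v "$fn" >/dev/null 2>&1; then
    "$fn" "$@"
  else
    echo "tier-1 $fn unavailable, falling back"
  fi
}

out0=$(audio_play track.ogg)
out1=$(tier1_call audio_resample track.ogg)
echo "$out0"
echo "$out1"
```

Tier-2 functions would skip the probe entirely and instead be explicitly imported, as suggested above.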

It's not like I have any other missions in life, and I have no GF, so no responsibilities to other people (except work lol).

So I don't mind if it is difficult.

>Besides we could be experienced in C programming, this is totally a whole new level.

What do you mean to say with this?

You mean because of the anon that proposed using Rust?

I think as long as every single module has a C-style API, it does not matter what a module is written in.

>Therefore, I suggest forking an old version of Linux, adapt it to the new design and then proceed.

I will try to take as much as reasonably possible without interfering with my design goals. Especially the module specification thing that >>1000557 proposed will probably conflict with some of the linux code base, making parts of it unusable.

> As a suggestion, consider looking at plan 9 design.

I read through the Wikipedia article on it, but in the next few days, I will look deeper into multiple platform designs, and of course also into POSIX to determine usable parts.

>Why would a pipe be accessible over the file system? IMO, the file system should only concern itself with storing and accessing files.

Often a program uses files in place of pipes. I write to a file, and you read from it later. One version of this I've seen is PID files: A daemon writes its PID to a file when it starts, and you can read from it later to know which PID to signal. You also have named pipes, which behave exactly like pipes, but are given a location on the filesystem. The neat thing about these is that the program doesn't need to know it is accessing a pipe. Consider this code for locking the screen:
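The snippet referenced here did not survive, but the property it demonstrated can be reproduced with plain coreutils: a reader opens $IMAGE with an ordinary open/read (i3lock's role) while a background writer (ffmpeg's role) feeds a named pipe at the same path.

```shell
#!/bin/sh
# Stand-in for the lost example: a reader that believes it is reading a
# regular file at $IMAGE, while the path is actually a fifo being fed by
# a background writer.
set -e
IMAGE=$(mktemp -u)                      # just a path; nothing exists there yet
mkfifo "$IMAGE"                         # the only line that differs from the file case
printf 'fake screenshot' > "$IMAGE" &   # writer (ffmpeg's role)
data=$(cat "$IMAGE")                    # reader (i3lock's role): plain open/read
wait
rm "$IMAGE"
echo "read: $data"
```

Replace `mkfifo` with an ordinary file write and the reader's code is byte-for-byte identical.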

Note how the code required no change in order to turn $IMAGE into a fifo. ffmpeg and i3lock don't know and don't care that they're operating on a pipe and not a real file.

In general I think it should be true that a program should not concern itself with how a file is being stored. Whether it is on one hard drive or another, or on some remote server, or in memory, or being fed to it from another process, is not its concern.

With that said, the modern approach, used by most linux apps nowadays, is to use urls and not file paths. This allows you to more easily treat remote and local resources the same, without having to mount a FUSE for every server you want to access. In order to make this work, you would need to have a common interface for every protocol, so that you can add new protocols without having to update every app to support it.
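Such a common interface could be as small as a dispatcher keyed on the URL scheme; the `fetch` helper and the toy `echo://` protocol below are invented for illustration. Applications call one entry point, and a new protocol means a new handler, not a patch to every application.

```shell
#!/bin/sh
# Dispatch on URL scheme: the common interface for every protocol.
fetch() {
  url=$1
  scheme=${url%%://*}
  rest=${url#*://}
  case $scheme in
    file) cat "$rest" ;;                 # file:///etc/motd -> cat /etc/motd
    echo) printf '%s\n' "$rest" ;;       # toy protocol for demonstration
    *)    echo "no handler for $scheme" >&2; return 1 ;;
  esac
}

tmp=$(mktemp) && printf 'hello from disk' > "$tmp"
out1=$(fetch "echo://hello-from-ipc")
out2=$(fetch "file://$tmp")
rm "$tmp"
echo "$out1"
echo "$out2"
```

A real system would register handlers dynamically rather than hard-coding a case statement, but the shape is the same.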

So your idea is basically to replace UNIX pipes with URLs and packets, if I read it correctly (t. brainlet). It sounds interesting actually, and would provide a decentralised IPC solution.

we could access devices like that (although this might be too UNIX for your taste): `sysping --data=0x1 syscall://dev.sd.0`, or `sys://dev/sd/0`.

sysping: sends bytes using system IPC packets?

And the UI would be a JS/HTML interpreter, an efficient, lightweight one (read: NOT Electron), and the core set of UI libs would be a JS framework. Perhaps we could add C binding headers that let us send draw calls and HTML/CSS data to the JS UI manager, so we could get rid of GnomeTK+ and Qbloat. If implemented correctly, it would potentially be something like GTK, Qt and Electron combined and done right, without the 100MB of RAM usage.

I'm doing my own thing mainly for my own amusement. The only major design goal as of yet is to leverage IPC/OLE mechanisms to keep code complexity down and keep things modular.

As an example, If I were to write a browser for it (which I wont, because it would take more time than the os+userland+compiler combined), the protocol related stuff would be an entirely separate module. My implementation of wget or curl would be little more than a shell script that talks to the http service.

How crazy would it be to implement the OSI model like it was meant to be done at an OS (userland, of course) level? I would actually fucking love to see my programs all using the same XML parser, the same encryption library, etc, all "transparent" to the developer, so the user can always have a final say on how to route/pipe a given request.

Why consider anything other than a native widget system? Under the hood it doesn't need to be much more than some primitive drawing functions, so what I'd be looking to do is finish/rewrite a prototype I shat out a few years ago in love2d. The main things I had left to do before I got bored and moved on were proper padding, margins, and a couple of builtins for positioning. Or I'll just shit out another one; GUIs are literally the easiest thing to make.

A native widget system presupposes that most code will be designed specifically for your system. In fact, most code will be written generically for any system. So your native widget system will be primarily used through a toolkit. Once you realize this, it becomes obvious that you should just implement the toolkit directly. Whatever toolkit you choose will probably be much more mature, and so have support for a variety of widgets, themes, language support, etc.

RISC-V does not use rings. Multics rings allow different privileges to be associated with different procedures running in a process, which neither UNIX nor Windows do. Most UNIX "innovations" are actually undoing real solutions to problems, like claiming that getting rid of toilets and shitting your pants is a solution to clogged toilets.

>Thus, a call by a user procedure to a protected subsystem (including the supervisor) is identical to a call to a companion user procedure.

>The characterization of rings as a restricted implementation of domains is the result of hindsight. When developed, rings were viewed as a natural generalization of the supervisor/user modes that provided protection in many computers.

>The old way that unix manages things is using the idiom "everything is a file".

That's actually marketing bullshit created as a reaction to "everything as an object" in languages like Smalltalk and Common Lisp. In these languages, integers, lists, arrays, strings, structures, classes, functions, packages, and all other data are objects. In UNIX, none of those things are files.

Yesterday Rob Pike from Bell Labs gave a talk on the latest and greatest successor to unix, called Plan 9. Basically he described ITS's mechanism for using file channels to control resources as if it were the greatest new idea since the wheel.

Amazing, wasn't it? They've even reinvented the JOB device. In another couple of years I expect they will discover the need for PCLSRing (there were already hints of this in his talk yesterday).

I suppose we could try explaining this to them now, but they'll only look at us cross-eyed and sputter something about how complex and inelegant that would be. And then we'd really lose it when they come back and tell us how they invented this really simple and elegant new thing...

It's stronger than just handy. It is the necessary level of abstraction. It is not the responsibility of the tool to keep track of where my files are coming from, and change their behaviour to match. I don't know how you use the word "unclean", but a system that forces a separation between files on disk and files in memory creates unnecessary work for all parties to no visible advantage, which I see as a very unclean thing to do.

It always pissed me off that send/recv aren't just write/read. I suppose one difference is that a socket is two sided, whereas file descriptors are normally unidirectional. But this distinction doesn't seem sufficient to justify two separate io apis.

> It's stronger than just handy. It is the necessary level of abstraction. It is not the responsibility of the tool to keep track of where my files are coming from, and change their behaviour to match. I don't know how you use the word "unclean", but a system that forces a separation between files on disk and files in memory creates unnecessary work for all parties to no visible advantage, which I see as a very unclean thing to do.

> It always pissed me off that send/recv aren't just write/read. I suppose one difference is that a socket is two sided, whereas file descriptors are normally unidirectional. But this distinction doesn't seem sufficient to justify two separate IO APIs.

I think that only files should be treated as such: you can seek in a file, tell its size, append, delete, overwrite. You can do no such thing with a socket or pipe. What would be more sensible is to provide a stream abstraction, where you can use sockets, files, microphones, etc., with an API that allows you to read/write bytes (even then, there needs to be a distinction between read-only, read-write, and write-only streams). If you want to meme it the UNIX way, then use the stream API for everything. But if you want powerful file manipulation primitives (mainly seek, tell), or socket primitives (shutdown, etc.), then you need to access an API that is designed only for files (or sockets, respectively).

I am a big fan of type-safety, and *everything is a file* just doesn't fit right with me.

Also, why would you use filesystem locations to locate a socket or keyboard? It is a fucking filesystem, there to organise data on your permanent storage into named FILES and DIRECTORIES.

Also, in the UNIX file API, not every file operation can be applied to every file, which is why I say it's not clean. Rather than that, I will make a FILE,SOCKET,KEYBOARD,PIPE < IN/OUT-STREAMABLE hierarchy. If something fits into the FILE concept, then all file operations can be applied to it. If something is a SOCKET, then all socket operations can be applied to it. And so on. The UNIX equivalent of a file would then be IN/OUT-STREAMABLE, which applies to almost everything. I would also make a distinction between filesystem locations and pipe names, device names, etc. You could, for example, create a streamable handle to a file via file(location) or file://location, and to devices via dev://name.
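A toy model of that hierarchy, with invented class names: generic consumers program against the streamable base, while seek/tell exist only on the file type.

```python
from abc import ABC, abstractmethod

class InStreamable(ABC):
    """The UNIX 'file' equivalent: anything you can read bytes from."""
    @abstractmethod
    def read(self, n: int) -> bytes: ...

class File(InStreamable):
    """Fits the FILE concept, so it also gets the file-only primitives."""
    def __init__(self, data: bytes):
        self._data, self._pos = data, 0
    def read(self, n: int) -> bytes:
        chunk = self._data[self._pos:self._pos + n]
        self._pos += len(chunk)
        return chunk
    def seek(self, pos: int) -> None:
        self._pos = pos
    def tell(self) -> int:
        return self._pos

class Socket(InStreamable):
    """Streamable, but deliberately has no seek/tell."""
    def __init__(self, packets):
        self._packets = list(packets)
    def read(self, n: int) -> bytes:
        return self._packets.pop(0) if self._packets else b""

def slurp(stream: InStreamable) -> bytes:
    """Generic consumer: works on any streamable, needs no file primitives."""
    out = b""
    while chunk := stream.read(4):
        out += chunk
    return out
```

A consumer that needs seeking asks for a `File` and gets a type error otherwise, rather than a runtime `ESPIPE` surprise as on UNIX.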

> a system that forces a separation between files on disk and files in memory creates unnecessary work for all parties to no visible advantage, which I see as a very unclean thing to do.

I never did say, or intend to say, that files in RAM should be treated differently from files on a HDD or SSD or other storage medium. I did say that the API for files should only be applicable to files.

A direct consequence of this is that URLs will be the default way to pass resources, which will also make it easier to pass remote resources, as the programmer would not try to open everything with the file API, but with the streamable API, which would then detect remote URLs and other things, and handle them accordingly to the specified protocol.
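A minimal sketch of the dispatch this implies, assuming a hypothetical `open_stream` entry point and a per-scheme handler table (both invented here):

```python
import io

# Per-scheme handlers; each returns some streamable object. The table and
# the open_stream name are assumptions for this sketch.
SCHEMES = {
    "file": lambda loc: io.BytesIO(b"(bytes of " + loc.encode() + b")"),
    "dev":  lambda loc: io.BytesIO(b"(device " + loc.encode() + b")"),
}

def open_stream(url: str):
    scheme, _, location = url.partition("://")
    if scheme not in SCHEMES:
        raise ValueError(f"no handler for scheme {scheme!r}")
    # A remote scheme like http:// would return a read-only network stream
    # handled according to its protocol, exactly as the text proposes.
    return SCHEMES[scheme](location)
```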

I have included many synchronisation primitives, as I am convinced they should be natively supported for simpler cooperation amongst processes. For example, if you want to synchronise execution and need a barrier, then on UNIX you would have to do some strange IPC, and even simulate the barrier in one master process.
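The primitive being asked for is simple; this thread-based Python sketch shows its semantics, with the stdlib `threading.Barrier` standing in for the native OS primitive being proposed:

```python
import threading

results = []
lock = threading.Lock()
barrier = threading.Barrier(3)   # the primitive the OS would offer natively

def worker(i: int) -> None:
    with lock:
        results.append(("before", i))
    barrier.wait()               # nobody proceeds until all three arrive
    with lock:
        results.append(("after", i))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Because of the barrier, all three "before" entries precede any "after".
phases = [phase for phase, _ in results]
```

The point is that no "master process" is needed: the barrier itself is the whole coordination mechanism.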

Everything having a web address is what Alan Kay said a modern Smalltalk machine should have. Every object would have a network address. Is OP taking his ideas and merging them with ((( rust )))? Don't use rust either unless you want AIDS.

To be precise, I think that resources such as locks, if owned by processes, should be located in that process (i.e., barrier://<pid>/<id>). Only file:// and ftp etc. URLs would actually correspond to a location in the network or in a file system.

Well, in theory, one could do that, but only with a well thought-out access control scheme. But as long as one is authenticated, I don't see why it should not be possible to interact with other machines like the local machine.

it's easy to make an OS faster than any current bloated piece of shit. i've been working on an OS for 10 years which is memory-safe, has a single PL with small amounts of assembly to bootstrap, and is slow and secure and has no 3D graphics.

>it's easy to make an OS faster than any current bloated piece of shit.

I don't think that speed is everything. I believe that the system should be programmer-friendly, and that you should be able to use it to easily program user-friendly programs.

I don't need it to outcompete in terms of speed or games or whatever, I just want a platform that truly respects freedom, is not subject to corporate interests and tyranny, and allows me to comfortably program it. As programming is evolving, so should the OS.

RISC-V is based on the PDP-11's supervisor/user modes with extra hypervisor and "machine" modes. The 286 and 386 protected modes were inspired by Multics. Instead of a kernel, Multics has code segments that run in certain rings. Running in a ring gives access to other segments in the same or outer rings. In both Multics and x86, each ring has its own stack segment. This goes beyond microkernels and also solves the PCLSRing problem and these other bullshit UNIX problems in a very simple way.

>Multics rings are a RISC-V thing, and they work differently from x86 rings.

Multics rings and call gates are exactly like x86 rings and call gates. On x86 and Multics, a single process enters multiple rings depending on the code that is executed. The only reason call gates aren't widely used is because they're not portable to RISCs and other worse hardware.

>And as most other architectures do not support call gates, their use was rare even before these new instructions as software interrupts/traps were preferred for portability.

>Call gates are more flexible than the SYSENTER/SYSEXIT and SYSCALL/SYSRET instructions since unlike the latter two, call gates allow for changing from an arbitrary privilege level to an arbitrary (albeit higher or equal) privilege level. The fast SYS* instruction only allow control transfers from ring 3 to 0 and vice versa. Upon comparing call gates to interrupts, call gates are significantly faster.

>But if you want to have powerful file manipulation primitives (mainly seek, tell)

UNIX files suck so much that seek and tell are considered "powerful file manipulation primitives" but most mainframe OSes have keyed and random access files. Files on these OSes are designed for random-access disks, not tape drives. Even "seek" and "rewind" are tape drive bullshit that slow down your computer and clog up your brain preventing you from understanding what your computer can really do. A disk lets you access multiple parts of a file without having to read everything in between and an SSD is even better at this. A consequence of the UNIX way is file formats like XML that are designed to be read one character at a time.
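For contrast, a toy keyed-access file in the spirit described: records are addressed directly by key, with no seek/rewind idiom. The record layout, class name, and in-memory index are invented for illustration.

```python
RECORD_SIZE = 16  # fixed-size records, addressed directly

class KeyedFile:
    """Toy keyed/random-access file: get/put by key, never 'seek then read'."""
    def __init__(self):
        self._index = {}             # key -> record number
        self._records = bytearray()  # stands in for the on-disk extent

    def put(self, key: str, value: bytes) -> None:
        rec = value.ljust(RECORD_SIZE, b"\0")[:RECORD_SIZE]
        if key in self._index:
            off = self._index[key] * RECORD_SIZE
            self._records[off:off + RECORD_SIZE] = rec
        else:
            self._index[key] = len(self._records) // RECORD_SIZE
            self._records += rec

    def get(self, key: str) -> bytes:
        # Direct access: no reading everything in between.
        off = self._index[key] * RECORD_SIZE
        return bytes(self._records[off:off + RECORD_SIZE]).rstrip(b"\0")
```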

>Also, in the UNIX file api, not every file operation can be applied to every file, which is why I say it's not clean. Rather than that, I will make a FILE,SOCKET,KEYBOARD,PIPE < IN/OUT-STREAMABLE hierarchy. If something fits into the FILE concept, then all file operations can be applied to it. If something is a SOCKET, then all socket operations can be applied to it. And so on.

That's why OOP and inheritance are good. "Everything is an object" also means files are objects, but "everything is a file" means UNIX weenies don't know what "everything" means.

I don't regard it a "real" UNIX, then again I wouldn't buy a "real" UNIX; 1970s software technology is not something I would want to buy today.

Getting caught up in the "pure" UNIX war will lead you to restrict yourself to "pure" SVR4 implementations; in the mainstream camp *only* SUN have gone for this. That in my view does not make it much of a "standard".

If a vendor decides to do something about the crass inadequacies of UNIX we should give them three cheers, not start a flame war about how the DIRECTORY command *must* forever and ever be called ls because that is what the great tin pot Gods who wrote UNIX thought was a nice, clear name for it.

The most threatening thing I see in computing today is the "we have found the answer, all heretics will perish" attitude. I have an awful lot of experience in computing, I have used six or seven operating systems and I have even written one. UNIX in my view is an abomination, it has serious difficulties; these could have been fixed quite easily, but I now realize nobody ever will.

At the moment I use a VMS box. I do so because I find that I do not spend my time having to think in the "UNIX" mentality that centers around kludges. I do not have to tolerate a help system that begins its insults of the user by being invoked with "man".

Apollo in my view were the only UNIX vendor to realize that they had to put work into the basic operating system. They had ACLs, shared libraries and many other essential features five years ago.

What I find disgusting about UNIX is that it has *never* grown any operating system extensions of its own; all the creative work is derived from VMS, Multics and the operating systems it killed.

>you're spouting bullshit now, aren't you? one of the hard-won lessons of the decades-long OOP experiment is that inheritance is one of the shittiest ways to compose objects.

What the hell? Nobody said anything about composing any objects via inheritance. This is about basic inheritance of concepts. A dog IS AN animal, this kind of inheritance. A file IS A streamable object. That has nothing to do with how I actually implement files, it merely states that on an abstract level, something is a generalisation of something else.

>This is about basic inheritance of concepts. A dog IS AN animal, this kind of inheritance. A file IS A streamable object.

What you're describing is just Dispatch based on Type. The inheritance you describe is simply an abstraction over that concept. What's worse, you can ONLY dispatch on type. Any additional dispatching you would like based on the data has to come from creating a brand new type to dispatch on.

He's calling you a university fellator specifically because you bring up inheritance (along with a literal university example) without understanding WHY you are applying inheritance outside of the OOP teaching material. Because of this, you won't understand the inherent limitations of Java/C++ style OOP design. If you want a better understanding of what objects bring to the table, go look up Smalltalk and its Object System, since it taught Common Lisp a few tricks. Erlang is another logical extension of what Smalltalk brought to the table. You'll notice that inheritance is missing. Your homework is to find out why.

> The inheritance you describe is simply an abstraction over that concept.

That is exactly what inheritance in the scientific sense means. I am not talking about the feature called "inheritance" in C++/Java. Those make additional assumptions to make composing objects easier. I am also not talking about making everything an object. The thing I intend to do is to create an API that correctly represents abstractions and concept hierarchies, which is actually easier to understand for newcomers than to have axioms like "everything is a file, even if it is actually something else".

> without understanding WHY you are applying inheretence outside of the OOP teaching material

I think you do not understand that inheritance extends outside of OOP, and designing an OS API is still programming, so it is technically still inside the field of programming, and therefore OOP techniques apply.

In Multics, each process has a separate stack for each ring. Multics uses this to its advantage to handle faults when running in ring 0. Faults occurring in other rings are simpler and you can retry or continue the operation or GOTO somewhere else.

>Crawlout occurs when ring 0 encounters a fault on the call side. The supervisor attempts to clean up, by executing any cleanup ON-units, and then:

>checks if any ring-0 databases or directories are locked, salvages and unlocks them

>abandons the inner ring stack

>pushes a new frame on the top of the outer ring stack ("caps the stack") and calls the ring's signalling mechanism

>when you get a file over the network, you can't arbitrarily seek on it.

Of course you can. FTP let you download a portion of a file for decades. So does HTTP.

>So seek and tell are powerful.

To a C weenie, finding the length of a string is powerful and takes a lot of cycles.

>If XML is too slow, choose a different format. A consequence of the unix way is that you can choose the file format that suits the problem.

A consequence of the UNIX way is that you have to rewrite millions of lines of code in all different programs and libraries if you want to change a file format.

>one of the hard won lessons of the decades long OOP experiment is that inheritance is one of the shittiest ways to compose objects.

The real lesson is that you should fix things that are broken instead of blaming the whole concept. OOP works but OOP in C++ is broken. The "diamond inheritance problem" is not a problem in other languages with multiple inheritance. It should really be called an "inheritance bug in C++" because that's what it is.

>What you're describing is just Dispatch based on Type. The inheritance you describe is simply an abstraction over that concept.

Inheritance provides additional consistency and guarantees compared to unrelated types. In that example, an ANIMAL has an age, so by saying a DOG is an ANIMAL, you guarantee that a DOG also has an age. Common Lisp also has a static type system in addition to the dynamic typing. Everything in Common Lisp has a type and T is the supertype of all types.
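The guarantee in question, as a plain Python stand-in (not a proposed OS API): code written against ANIMAL is promised to work on every DOG.

```python
class Animal:
    """Every animal has an age; subtypes inherit that guarantee."""
    def __init__(self, age: int):
        self.age = age

class Dog(Animal):
    def __init__(self, age: int, name: str):
        super().__init__(age)
        self.name = name

def birthday(a: Animal) -> int:
    """Written against Animal, so it works on Dogs by the subtype guarantee."""
    a.age += 1
    return a.age
```

With unrelated types, `birthday` would have to check for an `age` field at runtime; the hierarchy makes the promise statically.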

>Because of this, you won't understand the inherent limitations of Java/C++ style OOP design. If you want a better understanding of what Objects bring to the table, go look up Smalltalk and it's Object System since it taught Common Lisp a few tricks.

Common Lisp and Smalltalk are even more class-based than Java and C++ because everything has a class (including classes). What gives Smalltalk and Common Lisp fewer limitations is the ability to create and redefine classes at runtime and to change the classes of objects.

>Erlang is another logical extension of what Smalltalk brought to the table. You'll notice that Inheritance is missing. Your homework is to find out why.

Erlang is dynamically typed, but organized completely differently from Smalltalk. It's not class-based or method-based.

The talk is mostly based on paper [1] augmented by ideas from [2]. Because of its C heritage, C++ is both weakly typed and weakly structured. Because of the basic design decisions that were made in its object-oriented extensions, I claim that C++ is also weakly object-oriented. I will discuss several aspects and consequences of what I call the Fundamental Defect: that objects do not carry unambiguous type information at run time, in contrast to almost all other OO languages. The undisciplined handling of pointers (as in C) makes the problems worse. I will also mention some interesting problems of multiple inheritance, mostly pertaining to the distinction between "virtual" and "non-virtual" base classes (superclasses). Several other features and their problems will be mentioned, such as: reference types and argument passing, nested classes, storage classes and garbage collection, overloading, assignment and copying, templates (genericity) and exceptions.

The worst disadvantage of C++ at the moment is political: accepting C++ as the standard OO language de facto tends to kill other existing languages, and stifle the development of a new generation of essentially better OO languages.

Ample time will be left for questions and discussion after the lecture. That allows us to look at some details that really interest the audience. Also, many of my opinions are controversial, and I do not expect all listeners to accept them quietly.

>>when you get a file over the network, you can't arbitrarily seek on it.

>Of course you can. FTP let you download a portion of a file for decades. So does HTTP.

HTTP supports it only when the server does. In general, it requires protocol support, which is not guaranteed. The salient point is that you can sometimes seek, and sometimes you can't.

>A consequence of the UNIX way is that you have to rewrite millions of lines of code in all different programs and libraries if you want to change a file format.

This is what libraries and dynamic linking are for. The real consequence is that a serialization library doesn't have to be a centrally agreed-upon standard the way file systems are, and can instead be an organically grown de facto standard.

You guys are too ambitious. You need to abandon the entire concept of the "general purpose operating system" and go for the embedded/console model. Pick your thing and target just that, and later on expand it and allow additional modules. Fire up a basic HTTP browser on an ARM CPU and connect to the internet based on 100% homegrown open source code. Slip in anime videos later.

>A modern, proper operating system should think into the future and dare to prioritise making bold design decisions over compatibility with existing operating systems and software libraries.

That vague phrasing sounds like it came from the ivory tower, and the fact that you're going to /tech/ to get any serious work done shows you are absolutely out-of-touch with this image board and what it is capable of.

People come here to shitpost and feed their ego. Drawfags might make you a mascot for your new OS, but outside of that you're going to be doing all the work. I strongly suspect anyone here who's capable of helping you will wait to see whether your idealistic zeal will endure over even a month of having to do work for free.

Why don't you come back once you have more than vague notions of what you do or don't want to magically happen?

>the fact that you're going to /tech/ to get any serious work done shows you are absolutely out-of-touch with this image board and what it is capable of.

I frequent this thread, as useful suggestions are given from time to time. I posted on this image board because I feel comfortable here. Also, my inspiration for this project came from browsing here, so I thought I should keep the project here.

>Drawfags might make you a mascot for your new OS

I will need to think of a non-niggerlicious name first. When I've done that, I will also upload my work. Until then, I will be working on my local machine only.

>you're going to be doing all the work

As stated previously in this thread, I am going to do this completely on my own, if nobody is willing to contribute. After all, since I want to use this operating system, and since it does not exist yet, it is entirely my responsibility to create it myself. Of course, together with others, it is easier to recognise bad design choices early.

>work for free

Since you pity me so much, just for you, I will put up a crypto donation address in the repo.

>Why don't you come back once you have more than vague notions of what you do or don't want to magically happen?

I am currently writing a rather abstract specification, containing any high-level design choices I come up with. It grows more specific by the day, but it is a slow and tedious process, as I have to always ask myself: "Is this really a good decision? Isn't there a better way to do this?". As stated in the OP, I do not just want to create an OS of my own, but a quality OS which is cleanly designed and modern.

OP, you sound like an 'Ideas' guy. Just this post alone is riddled with future-tense, which tells me that you haven't even started on anything that's worth working on. You need to stop thinking and start writing. Put some of these 'Ideas' into a formal spec and then post it. Stop soliciting opinions until you have something close to a rough draft of your spec.

Your obsessiveness over banalities like CoCs and 'non-niggerlicious' names is a sign that this project has other motivations you aren't airing, ones that are utterly last on the list of concerns OS developers have.

How do I know?

>Naming is usually the last problem to be solved, even while we all wish for a cool name to our cool concept.

1. If you have no established codebase, people will not join because they can see you lack experience and expect the project to fail.
2. If you lack a (worked out) design, people will not join you because they can't see how your OS is more interesting than their own design.
3. If your reputation doesn't precede you, especially the more experienced people will be very wary of you and lack the trust to join.
4. If you don't have project management skills, the few rare people that do join will quit shortly because they are discussing stuff and do not get to code.

If I'm quoting the Beginner Mistakes from the OSDev Wiki you need to (a) seriously reconsider what you are asking for, (b) get the fuck out, and (c) don't come back until you have something more substantial.

Everything looks like a file, but it actually is just a generic value. What type of value it is is completely transparent to the user or developer at any time, but it needn't be known for it to be used. For example, let's say we go balls to the wall with the runtime transparency, to the point that all programs in our OS appear in our own "/run", and can receive signals that way (ideally, you would have a better way of communicating with a program than by knowing its PID beforehand, but let's leave that for later). We are running, say, a webserver, and we want to turn on the caching of pages right now with a command.

/run/1009/private/settings/cache < true

This looks pretty unixy, but the way it works under the hood is not exactly that way.

First, you are not accessing the filesystem at any given moment, as you are accessing the "system graph", which is a virtual tree-like structure which can offer representations of the actual file system, but doesn't have to. This more or less is just a rewording, since modern unixes offer a filesystem as virtual as they like, but in this case we are being honest about what's going on under the hood.

/, as usual, is the system graph root, let's call it Monad because I like that word, and because I want to be a special snowflake. All it is is the first node of the tree which represents our system. It may have some special properties, but generally speaking, it is just a folder. A virtual/volatile folder, which means it is present in your memory (generally speaking, where it is physically located should not be important for anything other than optimization purposes, but the system is very transparent about it) and not in your filesystem. Volatile means this folder will be recreated at some point in the future (i.e. on next boot) by the OS or another program, so anything placed in a volatile folder, even non-virtual stuff that may actually be present on disk, will be lost and possibly erased. In order to permanently store something, you have to put it inside an "anchored" folder, which is a folder with an actual representation in the filesystem. The Monad is the only volatile folder that can hold anchored folders.

run/ is yet another virtual volatile folder. It is where all processes are located and exposed for everyone to see. So is private/, which is where the program holds all the stuff it wants to expose to people with permissions (public/ would obviously be what it exposes to the world, which, if designed correctly, could even be the website itself, but would usually be just a public API and non-sensitive read-only values). All folders in this OS are "transient nodes", which means they can contain other nodes inside them, but not values.

We get to the settings folder. While it is accessed just like a folder, it is actually not. The main difference is it is actually an object, or struct, or whatever you want to call it. This means all its contents are defined at its inception, and can't be modified by anyone, not even the superuser. This is, simply put, a table of named values, just like a struct in any statically typed programming language. We go with static typing, no matter the underlying language, because a program wouldn't be able to identify new values nor know what to do if a value was missing, even if they can modify the contents of structs at runtime (for example, because they may actually be hashtables and not structs).

Then, we get to the cache node. It is actually a terminal node, which would be a regular file in most filesystems. It contains a value, and that's it. However, in this case, this is not a file. It is a value, a boolean-typed value to be specific, and a virtual/volatile boolean value, to be pedantic. Values can be written to and read from a value node, but only whole. Composite values, like strings, are actually special folder-like values with special properties, which could be read whole if accessed as a terminal value, or accessed in ranges as if they were a folder.

Finally, if the program has declared at some point to the OS (either in the executable, or during runtime) that it wants to be informed of changes to that value, the OS will send a signal to the program informing it from such a thing, so it can operate accordingly.
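The mechanics described above can be modelled in a few lines. This Python sketch (all class and method names invented) shows transient nodes, a fixed-shape settings struct, a typed terminal value, and the change signal:

```python
class TransientNode(dict):
    """Folders: they hold other nodes, never raw values."""

class Struct:
    """Contents fixed at inception, like the settings node: no new fields."""
    def __init__(self, **fields):
        self._fields = fields
    def __getitem__(self, name):
        return self._fields[name]       # unknown names fail, by design

class Value:
    """A typed terminal node that signals watchers on writes."""
    def __init__(self, typ, initial):
        self._typ, self._val, self._watchers = typ, initial, []
    def read(self):
        return self._val
    def write(self, new):
        if not isinstance(new, self._typ):
            raise TypeError(f"expected {self._typ.__name__}")
        self._val = new
        for notify in self._watchers:   # the OS "signal" on change
            notify(new)
    def watch(self, callback):
        self._watchers.append(callback)

# Model of /run/1009/private/settings/cache:
cache = Value(bool, False)
settings = Struct(cache=cache)
graph = TransientNode(
    run=TransientNode({"1009": TransientNode(private=TransientNode(settings=settings))})
)

seen = []
cache.watch(seen.append)
# the shell's `/run/1009/private/settings/cache < true`, in this model:
graph["run"]["1009"]["private"]["settings"]["cache"].write(True)
```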

Very nice post. I especially like the intuitive way of creating interfaces. The next obvious step is to create a shell language that can handle structs and typed values (possibly including an 'Any' type); then, when accessing a property of a program, a value is sent to it, possibly as a JSON-like representation. This is very much in line with the idea I had about sensible typing.

I think that using protocol names ("run://1009/…" instead of "/run/1009/…") is even cleaner, as they do not suggest the existence of a system tree. Firstly, I think that explicitly exposing everything as a tree that contains the file system tree confuses users, as it suggests that everything is in the file system. Secondly, having a protocol instead of a directory makes it more obvious what type of resource one is accessing.

connection://3583/send < "Hello world"

/connection/3583/send < "Hello world"

This is a nice example comparison: The type and the identifier are cleanly separated. I would also consider using a "." notation to access properties, which is more intuitive to programmers, somewhat like this:

run://1009/private.settings.cache = true

I also changed from "<" (which usually means read input from file) to "=", which makes it even more obvious that a value is assigned. Similar to C++ constructors, "a=b" would be equivalent to "a(b)", which has the advantage that functions taking multiple or no arguments can be exposed the same way as getters and setters. If the shell has full nested expression support, then it is easy to write C-like programs in the shell.

run://3583/add_connection("127.0.0.1", 8080)

or, more in line with current shell syntaxes:

run://3583/add_connection "127.0.0.1" 8080

Also note that I removed the "private" part, as anything that's accessible this way is explicitly exposed already.

Now, we have the possibility of exposing virtual programs to the rest of the system, and the "<" syntax would call that virtual program with the contents of a file. The same way, pipes could be used to feed a program output into a virtual program. However, I would very much prefer something like

run://1203/program $(some_program)

or, in C-style:

run://1203/program(some_program())

I am not sure whether C-style calls or shell-style calls are better, the major difference being that shell-style calls are easier to extend with additional arguments.
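To see whether the notations can coexist, here is a guess at a desugaring: a tiny parser that accepts the assignment, C-style, and shell-style forms above and normalises them to one call shape. The grammar is inferred from the examples and is purely illustrative.

```python
import re

# One regex covering the three notations seen above. A real shell would use
# a proper grammar; this only normalises the examples to one call shape.
_CMD = re.compile(
    r"(\w+)://([\w.-]+)/([\w.]+)\s*(?:=\s*(.+)|\((.*)\)|\s+(.+))?$"
)

def parse(command: str):
    scheme, ident, path, assigned, c_args, sh_args = _CMD.match(command).groups()
    if assigned is not None:        # `a = b` desugars to the call a(b)
        args = [assigned]
    elif c_args is not None:        # C-style: f(x, y)
        args = [a.strip() for a in c_args.split(",")] if c_args else []
    elif sh_args is not None:       # shell-style: f x y
        args = sh_args.split()
    else:
        args = []
    return scheme, ident, path.split("."), args
```

That all three forms collapse to the same (scheme, identifier, property path, arguments) tuple suggests the choice between them really is only surface syntax.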

> ideally, you would actually have a better way of communicating something to a program rather than by knowing its PID, beforehand, but let's leave that for later

I think it can be done similar to connection ports: why not let processes choose some number under which they are accessible, and if it's taken already, then they exit. This needs to be optional though, since anonymous processes need to be possible too. There could also be string names, although they would have to be limited in length.

This will effectively make coroutine-like programming possible, where every program exposes a public interface for interactions, and programs can message / query each other without much effort. Of course, as message passing is not very efficient on most architectures, a shared-memory version of the protocol that runs in user space should be supported as well. Then, if a program wants to create a shared-memory channel to another program, then it only needs to query it via message passing, i.e.

run://some-program/shared_memory(this_pid,memory_size)

The queried program then uses an OS function to establish shared memory between the programs. This switches to poll-based messaging, but without any context switches, which is nice.
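The handshake sketched above can be modelled with Python's stdlib shared memory standing in for the OS facility; the little request/reply protocol here is invented.

```python
from multiprocessing import shared_memory

# The queried program's side: create the segment and write into it. In the
# proposed design, only the segment's name travels back over the message
# channel; afterwards both sides poll the memory with no context switches.
server_seg = shared_memory.SharedMemory(create=True, size=64)
server_seg.buf[:5] = b"hello"
name = server_seg.name          # the "RPC reply"

# The requesting program's side: attach by name and read directly.
client_seg = shared_memory.SharedMemory(name=name)
greeting = bytes(client_seg.buf[:5])

client_seg.close()
server_seg.close()
server_seg.unlink()             # free the segment once both are done
```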

I refined the idea a little further: I abolished the rpc:// protocol; instead, I added the "->" RPC operator. In the shell language, using the "async" keyword allows a process to be run asynchronously, and the process ID is returned. This means that we can write:

$server = async ./server("localhost", 1337);

Now, we have asynchronously started a server, and the handle can further be used to make RPCs:

$server->cache(htdocs/index.html, "RAM");

This is a very intuitive notation. The application that is queried needs to set up its RPC interface, and then notify the system that it is set up. Only then are RPCs to that process processed, to remove race conditions.

RPCs themselves can also be made asynchronous via the "async" keyword, but I am not sure whether to allow an RPC to implement an RPC interface itself. This would make the language and RPCs more powerful, but at the same time, the system becomes more complicated. This way, RPCs could become like objects, with member functions of their own. The object (RPC handle) is stored in the process that produced it. I can imagine a use case: if I RPC a window manager to create a window, I could use the returned handle to control that window.
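A toy model of the handle-plus-RPC idea, with the "process" reduced to an object holding a registered RPC table; the readiness gate models the "notify the system when set up" rule. Every name here is illustrative.

```python
class RpcError(Exception):
    pass

class Process:
    """Stand-in for a process handle returned by `async ./server(...)`."""
    def __init__(self):
        self._rpcs, self._ready = {}, False
    def expose(self, name, fn):
        self._rpcs[name] = fn
    def rpc_ready(self):
        # "interface is set up, accept RPCs now" -- removes the race
        self._ready = True
    def __getattr__(self, name):
        # $server->cache(...) becomes server.cache(...) in this model
        def call(*args):
            if not self._ready:
                raise RpcError("RPC interface not yet set up")
            return self._rpcs[name](*args)
        return call

def spawn_server():
    """Stands in for `$server = async ./server("localhost", 1337)`."""
    p = Process()
    cached = []
    p.expose("cache", lambda path, where: cached.append((path, where)) or "ok")
    p.rpc_ready()
    return p, cached
```

Calls made before `rpc_ready()` fail loudly instead of racing, which is exactly the ordering guarantee the text asks for.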

Instead of pipes, allow programs to output an arbitrary data structure. You can emulate UNIX pipes by just outputting a character stream. Instead of having a program like awk, you have a program which can turn something from one data structure to another.

If the person writing the script has a different idea of what the data structure is than the author of the program, you can run into problems. This leads to developers not being able to change the output of their application, because some script might depend on some weird formatting in the output.

You also run into the problem where you have to convert numbers into text and then back into numbers instead of just passing the numbers.

Data structures which pass multiple streams are going to require you to create some protocol to use over the pipe to signify which stream is being added to.

>Instead of pipes, allow programs to output an arbitrary data structure. You can emulate UNIX pipes by just outputting a character stream. Instead of having a program like awk, you have a program which can turn something from one data structure to another.

I think pipes are sufficient; what is needed is an OS-wide standard for formatting data structures, which I would probably define in raw binary form rather than as text, as serialising into readable text has too much overhead, and the next program is going to deserialise it again anyway. I would go for something like a raw JSON format.
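As a toy illustration of what such a raw, JSON-like binary notation could look like (type tags and layout are invented here, not part of any spec), here is a minimal Python encoder/decoder:

```python
import struct

# Toy binary encoding of a JSON-like value: one type byte, then a
# fixed-size or length-prefixed payload. Purely illustrative.
T_INT, T_STR, T_LIST = 0x01, 0x02, 0x03

def encode(value):
    if isinstance(value, int):
        return bytes([T_INT]) + struct.pack("<q", value)
    if isinstance(value, str):
        data = value.encode("utf-8")
        return bytes([T_STR]) + struct.pack("<I", len(data)) + data
    if isinstance(value, list):
        out = bytes([T_LIST]) + struct.pack("<I", len(value))
        for item in value:
            out += encode(item)
        return out
    raise TypeError(type(value))

def decode(buf, pos=0):
    """Return (value, next_position)."""
    tag = buf[pos]; pos += 1
    if tag == T_INT:
        return struct.unpack_from("<q", buf, pos)[0], pos + 8
    if tag == T_STR:
        n = struct.unpack_from("<I", buf, pos)[0]; pos += 4
        return buf[pos:pos+n].decode("utf-8"), pos + n
    if tag == T_LIST:
        n = struct.unpack_from("<I", buf, pos)[0]; pos += 4
        items = []
        for _ in range(n):
            item, pos = decode(buf, pos)
            items.append(item)
        return items, pos
    raise ValueError(tag)
```

Since both sides agree on the byte layout, no text serialisation or re-parsing is needed; numbers stay numbers.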

What I do not like about UNIX pipes and processes is that they make it impossible to invoke a program like a function. Every program has one implicit input pipe and two output pipes, which makes it hard to get the actual value returned by a program (its useful output, such as a computation result), since it might be mixed with user input prompts. Even if all user input prompts are written to stderr, that just messes up the error logs.

I am not sure whether just increasing the number of output pipes a program has solves the problem. There needs to be at least one pipe that only contains the result data, one for prompts, and then probably multiple pipes for different levels of logs, warnings and errors. The problem is that I need to make the pipes easily accessible in the shell. What I could do is use the "async" notation to access a process, and then, via the acquired process handle, configure the output pipes; but that's also not very clean, as the configuration of the output pipes should be complete before the program starts executing.

The other problem is that since every program has an implicit input pipe, you cannot just write f(g(x)) to pass g(x)'s output to f. And the notation g(x) | f() is also shitty, since this only works for one argument. If I wanted two pipes for a program, I would have to use g(x) | f(h(x)), to pass g(x), h(x) to f. I am not sure yet whether the issue is purely notational or if the concept of implicit pipes is inherently flawed.

This notational difficulty is also what makes shell scripting different from programming. In proper programming languages, everything a function gets as input is written explicitly as parameters, and not as some implicit input that can also explicitly be modified. Maybe I should make the stdin pipe explicit, so all of a program's arguments are automatically pipes (or streams, which is essentially the same). If I then want a program to read user input, I need to explicitly pass the user input stream as a parameter. So, calling sha512sum on user input would look like this:

sha512sum($IN)

Here, "$IN" is the keyword, or magic variable that represents the user input stream.

This way, you can let user-level programs define settings://, irc://, etc. First one to register it gets it. A package manager would be more suitable for settings://. The init system for run://. The filesystem for file://.

You'd need a distinction for the type (file, stream, shared pointer) of URI, so the kernel can reject a file-open on a non-file URI.

You do realise that syscalls have a limited number of arguments? I don't see any problem with C strings; they even consume less memory than size-tagged strings. And if you need to support 0-bytes, strings are the wrong data type anyway.

That's a nice idea. I think that some protocols need to belong to the kernel though, such as file://. Files are so essential to the system that they are the default resource in the shell (if you omit file://, it's still treated as one). I think those protocols should only be used for resources, and RPCs should be handled separately, as RPCs are an action, and not a resource.

I think that representing processes via numbers is the easiest way. Singleton processes could register their name globally ($$window-manager?), while non-singleton processes need to be accessed using my "async" construct.

>btw, single user or multi user ? I think everybody has their own PC by now, so that could simplify the design.

I did not think about that very much yet. Of course, multiple user accounts for shared machines (think of little kids owning a shared computer, because they are not spoiled) should be supported. However, simultaneous access by multiple users (like on a server) is more complicated. I think it would be good to design the system to allow this. I think there should be an isolation system: a process (or its invoking shell) can choose whether it wants to be accessible to other processes of the same user, of a group, or of all users. The same goes for processes that expose a uri:// name.

So I would enable it by design, but I wouldn't implement it (at least not as any immediate priority). The development focus will be on getting it to run for single users, while keeping the system design general enough to support more cases.

It would also be interesting to have support for distributed execution on the OS/middleware level.

URI copying is important, as you need to pass it to a system call. If you do not copy it, you need to at least map it into kernel memory. For 0-terminated strings, you need to read the whole string before knowing its bounds. This means that you do not know how much memory to reserve beforehand.

Not all syscalls are blocking. For blocking syscalls made by single-threaded applications, no copying is required. However, as soon as the caller is multi-threaded, even blocking won't help, as other threads could corrupt the syscall parameters while it is still executing. And for nonblocking syscalls, you definitely need to copy the input values. And if you copy strings, it is better to know the string length without having to read all of it.

Multi user for sure. I like the isolation users provide, which is admittedly a hack and should have its own dedicated permissions system, but still.

Talking about which, the permissions a program has are a very important matter. I think only the supervising user or a more privileged user should be able to cede a program their own permissions, via a simple config file the kernel reads upon launching the program. This should usually be handled by package maintainers to avoid bothering the user, but can be overridden by power users if needed. By default, programs have at least three permissions: writing to their own "open" area (think logs, temporary files, anything non-sensitive); reading from their own "closed" area (settings, which only the user, or the program itself through some simple MAC/UAC mechanism, can change; this is to prevent a program like Firefox from going rogue if someone ever manages to overwrite its about:config with malicious settings); and accessing the common area of the user, which would be the user's home directory in unixland, minus the scattered config files. This way, programs cannot alter the config of other programs (Firefox can overwrite your .bashrc, let that sink in), they cannot alter their own files without the user knowing, and they can still have full access to non-sensitive data, like media.
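To make this concrete, a hypothetical per-program permissions file of the kind described could look like this (format and key names entirely invented):

```
program:     firefox
open-area:   read, write
closed-area: read
common-area: read, write
```

The kernel would read such a file at launch; package maintainers ship a sane default, and a power user can tighten or relax it per program.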

Bonus points if the OS also implements an "authentication server", by means of which programs can ask the OS to authenticate against a server or service using the passwords or keys stored by the OS, without the program ever knowing said password. It wouldn't be the first time I just copied and pasted my classmates' .mozilla folder to my desktop via ssh to access their passwords and copy their homework; regular programs shouldn't be tasked with something as important as storing passwords.

Unless you have a revolutionary OS architecture to avoid stack and buffer overflows: don't. And even if you do, still don't. Let's not fuck up a new system with mistakes of the past, the base system utils and libraries shouldn't encourage developers to use a format that requires an O(n) operation to know the length of a string, and where everything can go very wrong with an off by one error.

Named resources will be one of the main features that allow the OS to be easy to program for. I think giving programs the choice to register aliases under which they can be accessed is the way to go. The alias is then the program's public namespace.

I think that it is important to prevent alias hijacking, but I have no concrete idea yet how that would work.

I think that to establish trust with the user, a capability-based and resource-based permission system is necessary. Resources have their own permissions, as do system functions. The capability system would not have much runtime overhead, as syscalls only have to look up an entry in a capability table. However, the resource permission system would be more complicated; basically everything would have to support having a more or less fine-grained permission set.

>authentication server

It's a nice idea, because you wouldn't have to trust your programs to not leak any passwords if they cannot access them. Something like Linux's crypto API would be necessary: The program can request the kernel to instantiate a keyed hash function, encryption scheme or random generator, with the kernel-controlled password as key. However, every access to those functions would have to be a syscall because of the memory isolation. This is problematic, performance-wise. Also, the authentication server would need to support many cryptographic implementations and network protocols.
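A user-space toy of the idea might look like this in Python (class and method names invented): the service holds the key and only ever returns MAC tags, so callers can authenticate against a challenge without being able to leak the password:

```python
import hmac, hashlib

# Toy "authentication server": it stores secrets and hands out
# authentication tokens, never the secrets themselves. A real kernel
# service would sit behind a syscall boundary with memory isolation;
# this class only illustrates the interface.
class AuthService:
    def __init__(self):
        self._keys = {}  # service name -> secret key, never exposed

    def store_key(self, service: str, key: bytes):
        self._keys[service] = key

    def sign(self, service: str, challenge: bytes) -> str:
        # The caller receives a response token derived from the key,
        # but the key itself never leaves the service.
        return hmac.new(self._keys[service], challenge,
                        hashlib.sha256).hexdigest()

auth = AuthService()
auth.store_key("irc.example", b"hunter2")
token = auth.sign("irc.example", b"server-nonce-123")
```

The performance concern above is visible even here: every sign() call would be a syscall, so chatty protocols would pay for each round trip.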

I think that is not the case. What causes buffer overflows is if you use C stdlib functions that have no additional parameter for the buffer length. The main difference is that 0-terminated strings are streamable, as you can append to a stream until you put the 0-byte, whereas in size-tagged strings, you need to know the size before even sending the first byte of the string.

I am currently thinking of a specification for the (binary) object notation scheme in the OS's API, which is to be used to pass values to programs or return them. There, an extra string type is included, but if a program wants to output a large string (say, multiple gigabytes), then it is infeasible to let it generate the whole string before returning it. I am inclined towards using 0-terminated byte strings with an additional escape sequence to allow strings to contain 0-bytes.

The streamable approach allows even large values to be passed between programs with little overhead.
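A minimal Python sketch of such an escaping scheme (the escape byte and its codes are chosen arbitrarily here): 0x00 and the escape byte are replaced by two-byte sequences, so a single raw 0x00 can safely terminate the stream while the payload may still contain 0-bytes:

```python
ESC = 0x01  # escape byte, chosen arbitrarily for this sketch

def encode_stream(data: bytes) -> bytes:
    """Escape 0x00 and ESC so the body contains no raw 0-byte,
    then terminate with a single 0x00."""
    out = bytearray()
    for b in data:
        if b == 0x00:
            out += bytes([ESC, 0x02])   # 0x00 -> ESC 0x02
        elif b == ESC:
            out += bytes([ESC, 0x03])   # ESC  -> ESC 0x03
        else:
            out.append(b)
    out.append(0x00)                    # stream terminator
    return bytes(out)

def decode_stream(stream: bytes) -> bytes:
    out = bytearray()
    i = 0
    while True:
        b = stream[i]; i += 1
        if b == 0x00:
            return bytes(out)           # terminator reached
        if b == ESC:
            nxt = stream[i]; i += 1
            out.append(0x00 if nxt == 0x02 else ESC)
        else:
            out.append(b)
```

Note that the encoder can emit bytes as it goes: it never needs to know the total length up front, which is exactly the streamability argument.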

I improved the Object and process concept: Objects are written using the OS's object notation, and arguments are passed to programs as objects. Additionally, a process is passed an extra output stream to which it can output exactly one object.

Objects are inherently treated like streams, which makes it easy to pass them between processes (or even over the network, later on) without any complications, independent of their size.

Also, it seems that gitlab broke the links in the main README.md, you now have to navigate to the files over the code directory (Links in files inside specs/ seem to still work).

Every object stream can now only consume/produce a single object; if multiple objects need to be passed in one stream, a list can be used. Passed objects are read-only. I think this is a step closer to intuitive program invocation. However, I still need to make it possible to convert STDIN into a string / blob type Object.

Sadly, I have much to do, so I didn't get to work on the project recently. I did some thinking and came up with a few new ideas.

>Object notation in files

I am thinking about extending the Object notation to files, to have files also store only one object each (which itself can be a list of objects, of course). This would make saving and loading structured data much easier, without having to write complicated file formats yourself. Files will no longer be treated as binary strings, but as structured data, and can be accessed as such. Of course, accessing files as raw data will still be possible, since files from other platforms do not conform to the object notation. If structured data is supported on the file API level already, then it will become much easier to have accesses such as (pseudo code):

$settings = open(settings.obj);
print("Debug: ", $settings["debug"]);

Here, the file's contents are exposed externally as an object. A thing to consider is that simple appending to a file that already contains an object is no longer possible. In case the file contains a key-value map, writing to a new (or existing) key would be possible, if it is an array, appending would of course be possible. This will take a new approach on thinking about files, but in the long run, would be very helpful. If performance optimisations for a specific file format need to be made, of course, raw access is necessary.

>Journaling and versioning

I was also thinking about journaling in the file system. Commonly, either there is no journaling, or only file system operations (such as copy, move, create, delete) are journaled, to ensure the integrity of the file system even in the presence of outages. I think that in addition to structural integrity of the file system, even file content integrity needs to be ensured. I think that journaling is not practical for file contents, however, a versioning system for files would solve that. Instead of modifying a file, only the changes made are saved, in a separate place (i.e., all modified blocks of a file are stored separately from the actual file). Then, when reading the file, the file's content is reconstructed from some kind of binary diff. This way, even if you crash while writing a modification, nothing really happens to previous versions of the file. Of course, there would need to be a merge tool that discards the previous history of the file, to free up disk space. Maybe one could even do this automatically, via setting a maximum history length in a file's options or something like that.
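A toy Python model of this block-level versioning (block size and API invented for the demo): writes never touch the base blocks, each version only records the blocks it changed, and any earlier version can be reconstructed by replaying fewer deltas:

```python
BLOCK = 4  # tiny block size, just for the demo

class VersionedFile:
    """Toy copy-on-write file: base blocks are never overwritten;
    each version stores only its modified blocks, so a crash during
    a write cannot damage previous versions."""
    def __init__(self, base: bytes):
        self.base = base
        self.versions = []              # list of {block_index: bytes}

    def write(self, offset: int, data: bytes):
        # For simplicity, the demo only allows block-aligned writes.
        assert offset % BLOCK == 0 and len(data) % BLOCK == 0
        delta = {}
        for i in range(0, len(data), BLOCK):
            delta[(offset + i) // BLOCK] = data[i:i+BLOCK]
        self.versions.append(delta)

    def read(self, version=None) -> bytes:
        """Reconstruct the file contents as of a given version
        (default: latest) by applying deltas over the base."""
        if version is None:
            version = len(self.versions)
        blocks = [self.base[i:i+BLOCK]
                  for i in range(0, len(self.base), BLOCK)]
        for delta in self.versions[:version]:
            for idx, blk in delta.items():
                blocks[idx] = blk
        return b"".join(blocks)
```

The merge tool mentioned above would correspond to folding old deltas into the base and dropping them, trading history for disk space.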

>File concurrency and transactions

I am not sure whether this is a good idea, performance-wise and security-wise. However, I think if a proper concurrency control system was introduced, then concurrent access to files should be possible without sharing a file handle. I see this as potentially useful for database-like files (key-value maps and arrays, mainly). Maybe the whole file API should be transaction based, but I'll have to think about that a little more.

>System calls

I also thought a little bit about system calls, and I think that they should be interruptible, i.e., by context switches or keyboard input. I am not sure whether this is the state of the art. It seems that in Linux, the current way to do things is to just restart the syscall from the beginning if it is interrupted.

>The old way that unix manages things is using the idiom "everything is a file". This clearly doesn't work in the modern age, so it should be replaced: everything is a web server. When you want to edit a text file, you wouldn't execute the file containing your text editor, passing the file containing your text. Instead you would visit the website of your text editor, and give it the url of your text file to edit. Rather than complex and unnecessary windowing system, you would have a simple web browser with a number of tabs. Rather than having to argue endlessly about programming languages and graphics toolkits, ui would be coded with html+css+javascript. Rather than having hundreds of different systems for ipc (signals, pipes, fifos, sockets, etc), communication between servers would be done with good old fashioned get and post requests. The shell would be superseded by the search engine.

I am not sure how to handle file formats, though. Something like a template, maybe a rule-based grammar could work here. You'd have a context-free grammar, which you can then pass to the OS while opening a file, and it verifies that the file matches the grammar. The grammars must be stored in some shared location, so that any program can use them. However, context-free grammars cannot validate everything, and more complex grammars are not trivial to parse. In the end, I think that it's not possible to have the OS do full file parsing. I think that for every file format, there needs to be a library that checks integrity of files. However, since the OS supports typed values inside files, you can have portability by also writing your byte order in the beginning of a file (or in its header). If you open a file with another byte order, the OS can then automatically convert numbers to your byte order, while keeping strings, blobs, etc. intact.

Another thing I do not know how to handle is errors: if a streamed object is passed to a program (i.e., some on-the-fly generated stream of bytes), and then, suddenly, there is an error in the format, the stream is shut down (on the writer's side). What happens to the program that received the object's stream? Will it also be shut down? Is only the read stream shut down? Does it just receive an error? Is it signalled with an error message, or will it only see the error when trying to read more data?

That's what I was planning to do. However, existing file formats are sometimes too complex for extensive error checking by the OS. That's why I am still considering whether to add OS-side type checks or not.

Every user has at least one identity, which consists of a public and secret key, which are generated from his password.

Every file has its own (symmetric) key, which is used to encrypt its contents and name. An authenticated encryption scheme should be used for everything so that you can be sure that no unauthorised modifications were made to any file.

The file's name and contents are encrypted with the file's key (which is random). That key is stored in the file's metadata. For every identity that has access to the file, the file's key is encrypted and put into the metadata section. So if user A has access to a file, then Enc(A.pk; file.key) is stored in the file's key section in its metadata.

The list of available / used blocks needs to be accessible for every user, so it should not be encrypted. However, if there is a directory you have no access to, then you don't know which blocks on the disk belong to it. The disk itself can also be encrypted and protected by a password, but that has to be known to everyone who should have any access to any part of the disk.

Since file names are encrypted, you can store your own files in a shared directory, but nobody can guess the name of the file, its size, location (of its contents) on the disk, etc. This implies that file contents and file metadata are separate.

Since we use authenticated encryption, nobody can replace your files with fakes without being detected. It also makes it safe to share a drive over the network, as no information can be learned or manipulated by others. There also needs to be a central list of identities (or rather, their PKs) that appear on the disk, to save much redundancy. File access tags would then look like H(id.pk), Enc(id.pk; file.key).

The encrypted file system's security must be combined with further restrictions by the OS (such as enforcing read-write-execute permissions) to achieve security against malicious programs.
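A deliberately insecure toy sketch of this layout in Python (a keyed-hash XOR stream stands in for real public-key encryption and an AEAD, purely to show the data structure): one random key per file, one wrapped copy of that key per identity with access, plus an authentication tag over the ciphertext:

```python
import os, hmac, hashlib

# TOY ONLY. A real system would use public-key encryption and an
# authenticated cipher; the keyed-hash keystream below is NOT secure.

def _xor_keystream(key: bytes, data: bytes) -> bytes:
    """Deterministic keystream from the key, XORed over the data.
    Applying it twice with the same key recovers the input."""
    out = bytearray()
    block = b""
    for i, b in enumerate(data):
        if i % 32 == 0:
            block = hashlib.sha256(key + i.to_bytes(8, "little")).digest()
        out.append(b ^ block[i % 32])
    return bytes(out)

def seal(user_keys: dict, plaintext: bytes) -> dict:
    file_key = os.urandom(32)                 # fresh random per-file key
    ct = _xor_keystream(file_key, plaintext)
    tag = hmac.new(file_key, ct, hashlib.sha256).digest()  # authentication
    # one wrapped copy of file_key per identity, indexed by a key hash
    wraps = {hashlib.sha256(uk).hexdigest(): _xor_keystream(uk, file_key)
             for uk in user_keys.values()}
    return {"ct": ct, "tag": tag, "keys": wraps}

def open_file(user_key: bytes, blob: dict) -> bytes:
    wrapped = blob["keys"][hashlib.sha256(user_key).hexdigest()]
    file_key = _xor_keystream(user_key, wrapped)
    expect = hmac.new(file_key, blob["ct"], hashlib.sha256).digest()
    if not hmac.compare_digest(blob["tag"], expect):
        raise ValueError("file was tampered with")
    return _xor_keystream(file_key, blob["ct"])
```

A user without an entry in the key section simply cannot unwrap the file key, and any modification of the ciphertext is caught by the tag check, which is the "authenticated encryption" property argued for above.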

There is a problem with remote access though: since only someone with access to a directory can see its contents, you cannot simply request a private file of yours from a remote drive if the server cannot decrypt the directories, or does not know where the file is and how big it is. Maybe the directory contents, positions and sizes should not be encrypted after all, or only as an option. However, directory and file names would still be encrypted, although we would have to use deterministic encryption for that, which is not as secure.

Right now I am alone and don't have much free time for this. I am still looking for partners (here and IRL), already found one potential partner. It will take quite a while to produce anything usable, as much time will be spent planning the whole thing to make sure it is good.

Thanks, I am glad you like it. I've only known windows and linux all my life, so I am pretty sure I didn't experience much diversity in OS features. This makes it hard to think outside the box. So if any of you know some good features from another OS that you would like to see in my OS, feel free to suggest it.

Here, we first invoke an interactive program ("SQL"), and using the "async" keyword, we do not wait for its completion, but immediately retrieve its process handle. Then, we make an RPC ("table") to that handle, which is also marked async, since we want to interact with the loaded table. Think of the operation as loading a table, caching some of it, or whatever. Next, we query the loaded table to select its contents. This time, we want to use the result of the operation, not interact with it, so no "async".

Some programs or RPCs are inherently interactive ("SQL", "table"), and are designed to expect RPCs. In such cases, the async keyword could be made the default behaviour when calling (maybe by adding a flag to the executable, or when registering an RPC). This would save us a lot of effort when writing scripts that interact with programs. However, it would also create an inconsistency, as some programs and RPCs would block by default ("select"), and some wouldn't (depending on whether they are marked interactive). We would then still need to create an additional command that waits for interactive processes to finish, or to asynchronously execute noninteractive processes.

The system inherently supports lazy/halted evaluation of programs, as all returned values and passed arguments are streamed objects, so passing one program's output as another's input does not mean that the input program needs to finish before the other can start, and might in fact be halted until the other program reads data from the output stream. We'd need some kind of barrier or join operation to actually ensure that a program finishes before starting the next program. I think it would be good to make ";" an implicit barrier, and maybe add "," for jobs that can be run in parallel. Parallel would mean that they are started in the order in which they were written, but the next program may be started the moment the previous program is instantiated (as soon as its output stream and PID exist).
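A hypothetical sketch of the two separators (program names invented):

```
./compile(main.c), ./compile(util.c);
./link(main.o, util.o);
```

The two compiles may be started in parallel; the ";" acts as a barrier, so the link only begins once both have finished.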

Maybe block statements should also act as barriers, and similar to C++, destroy variables declared inside. I am not sure though whether it would be more reasonable to wait for all execution started inside the block statement to finish, or whether to just send some kind of "destructor" message to all programs that need to finish, and then just continue on with the rest of the code. This needs some further consideration (how to handle interactive and non-interactive programs, etc.).

Let's be honest here for a moment: The CoC does not magically make the Linux code bad (I am pretty sure that the quality should more or less stay the same). So if you really want Linux, you can keep using even the CoCed version.

However, seeing that the Linux developers became cucks, this is a good opportunity to make a cut and switch to a completely new, not "good enough" OS / kernel. As Corporations are pushing to drive out the people who made Linux what it is and take control of free software, we should abandon Linux and switch to a better system. The good thing is that Corporations are only interested in "good enough" systems, so they will most likely leave us alone: It's not profitable for them to do something "the right way".

To design a good operating system you don't actually have to implement it. Since there isn't even a concrete plan of the entire operating system, it's reasonable for the process of programming it to not have started.

>select shouldn't cause it to halt. The print function / program should wait until the entire thunk is evaluated so it can extract the strings.

Not exactly. The script only continues after select() and print() finished. However, print() runs while select() runs, and prints the output as it is generated (on the fly). Otherwise, if select() outputted a huge string, it would kill the machine to buffer everything.
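Python generators model this behaviour nicely. The sketch below (function names mirror the shell example, the rest is invented) records the interleaving to show that printing happens while the "query" is still producing rows, so nothing ever buffers the whole result:

```python
# Records the order of events so the interleaving is observable.
events = []

def select(n):
    """Stand-in for the shell's select(): lazily yields rows."""
    for i in range(n):
        events.append(("produced", i))
        yield f"row {i}\n"

def consume(stream):
    """Stand-in for print(): handles each chunk as it arrives."""
    for chunk in stream:
        events.append(("printed", chunk.strip()))

# The consumer pulls rows one at a time; producer and consumer
# alternate, and the call only returns once both are finished.
consume(select(3))
```

Each row is printed immediately after it is produced, yet the line after consume() only runs once both sides are done, which is exactly the semantics described above.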

When you would run this "B" would be immediately printed. When the second print is executed it will now wait for all the thunks we created to be evaluated. In your original proposal, it would not print "B" until $results was completed.

i'm a transgender girl just getting into programming but im not the annoying type LOL! xD anyways, just lettin' u nerds know that im 100 percent on board and hope that once i find an alias and change my typing style, if anyone finds out that im a cute transgender girl, that there will be no bully plz

the point of this is to enact meritocracy instead of social justice virus, so it'd be totally coolio to ensure that hot traps with good code are accepted happily and that anyone with bad code is sent to the chamber of dooooooom! ^_^;;

>In your original proposal, it would not print "B" until $results was completed.

Please note that in my example, "select()" was not marked "async".

Technically, "async" would return a handle, such as a PID or similar, which can be used to interact with the execution of the invoked program / RPC. I thought about introducing the "await" operator to get the return value of an async operation. Then it would look as follows:
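A hypothetical sketch (program names invented):

```
$results = async ./select(mytable);
./do-other-work();
print(await $results);
```

The shell continues immediately after the first line; "await $results" then blocks until the async program has produced its return object.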

If the return value of an async value is not fed into another program as input (or discarded via simply "await $results;"), the async operation may be stalled if it tries to write to its output stream.

Cute progsocks? I just bought some pink ones, along with a full outfit set, including pretty little cat ears and a tail! The most proper girl is a girl that finds the most skilled nerd, and serves him utterly, while helping him find a female to breed with.

Normal females aren't really good companions for nerds, so it's up to us traps to ensure that not only are nerds not lonely, but that they also have a charismatic companion to net them a proper female for child surrogacy reasons. That way their genetic lineage doesn't die! Yay for being both skilled, and having a family. If a nerd can't talk to females, then I will, such that we both end up with a family, so that we don't have to die alone.

A trap is a necessary component for any technical project as well, for morale reasons. A bad, wicked trap can do the opposite, like the hideous code-witch Coraline. But me? I'm much cuter, seeing as I type anonymously here, and you can imagine any sort of cutie that you want, seeing as you won't see my face ever. So hopefully I can get in good with this project, and help see the rise of a new Operating System paradigm that not only defies the corporate wasteland of HR obedience, but also creates a platform upon which beautiful, elegant code can be run to do glorious things.

There are multiple components of any proper OS that are required. Now, we do not need to play "Let's be Linux Windows X."

We don't need to copy our competitors, if we are forging a new paradigm. What we need now is simple, and so let me write a list of some elements that I think are required. Most will be obvious, but it's still good to have it in writing. If you disagree, then let me know, and if you agree, then let me know. Let me begin.

A CLI.

A platform upon which to run ASM/C code binaries.

That platform can easily be upgraded via third-party software to also run any sort of binary package written in any language.

An OS must be compatible with different combinations of hardware, as well as the three popular architectures of CPUs.

This OS must be advertised to developers as soon as it is ready. Let's port Nethack, Descent, Doom, Quake, and yes, even fucking Minecraft, to it as soon as possible, and propose that this OS is not only a techy-hobbyist platform, but also a place for true gamers to find respite from the console wars and decline of modern AAA PC gaming.

It's ok to have sweaty armpits as long as we're onboarding excited new developers, right?

Let there be a software manager as Linux has, and ensure that as much GPL software is ported as possible, however, let us play a nice Apple move, and ensure that all software on the main package manager is vetted by us, to ensure that it is as bug-free and useful as we desire.

This OS must be considered, "Enlightened." If DOS was gen I, after the primitive OSes that came before it, then popular GUI OSes after DOS were gen II, and that basically makes OSes like Android gen III. Windows 10 and CoC Linux can also count as gen III. So we need to be Gen IV.

Thus, a name I propose is Genevieve. "Gen IV."

Gene refers to life-code, which is a great start, and "Vivian" was the name of the original Gamer-Gate mascot. I'm not a gamer, but I was part of the original exodus to 8chan after 4chan became a serious censor-pit cesspool.

So we have a chance for a (controlled) edgy fork called Memevieve, and a popular fork called Genevieve. The name is a classic call-back to the legend of King Arthur, which of course, is what this world needs: A king summoned by a wizard to right what is wrong with the world. And that is what this project is meant to do. All branches of modern computing have merged upon "Corporate Enslavement and Government Surveillance." We must free computing and give it back to the hackers and rebels, and to do this, we need a truly new beginning.

So let's have it be Genevieve.

If these points can be agreed upon, or modified with improvements, then we can break them down further into actionable points upon which to begin actually coding. Let me know what you think. ^_~

>I honestly think having an async keyword is dumb. It should be async by default.

Maybe in a very pure functional language;

but otherwise 'async' is helpful to indicate where execution might switch to another coroutine, possibly changing global state, as opposed to non-I/O functions, where you don't have to think too much about interleaving. Also, performance.

Agreed. However, I would not run standard C, but C with a different main function signature, as written in the specs (I think it was the process page).

>Driver support

Tbh, I'd first work on the technical stuff and then add drivers. Of course we will need them eventually, but it's not the most important thing. Even basic video and text mode would be sufficient at first.

>Advertising and porting games

That's way into the future. Of course, it would be nice to be able to play games on it, but I think that that's a double-edged sword: the gaming industry will push users to install proprietary drivers, to get a few extra FPS or some special VFX, which is how the whole system begins to be compromised. I also think that the current gaming industry is at least as bad as the rest of the software industry, save a few indie studios. They long since stopped making fun games, and are now blatantly milking their customers.

>Sweaty armpits

Checked ;)

>Software manager

Yeah, seems important.

>strict auditing

Well, I don't really have the time for it, but as the community grows, I think that would be good.

>Enlightened / Gen IV

Well, I don't have a fixed vision for a fourth generation OS.

However, I have a proposal: I am heavily into decentralised/P2P and crypto. I think we should provide many privacy-preserving tools:

- A P2P file sharing program (probably one of the existing ones),

- TOR,

- A P2P image board / social network,

- A P2P chat / voice chat

If we have all these features covered, then users do not need to install any third party software to do what they need.

>Genevieve

You mean French pronunciation? Not sure if I like that. Since I already labelled it epOS, I take it you don't like that name?

>Vivian James

Hmm, not sure if I like it. I mean, she's cute and relatable, but I think it would somehow miss the point to take a gamer mascot for an OS. I think we need a mascot of our own. Something about not caring about / opposing current corporate culture, censorship, political correctness and surveillance. Cute and embracing our autism maybe. (You may come up with better suggestions, this is just off the top of my head.) Or maybe something like a pair of mascots: a cute one, and a heresy purger fighting against proprietary software and corporate culture. Maybe a wizard with his apprentice witch?

I don't think that async by default is good: the shell should be easy to use, and asynchronous execution is an "advanced" concept. To someone who has just started programming, it is much more intuitive that everything they write gets executed in that order, like in almost all other programming languages. Furthermore, it gets more complicated, as you have to know whether two programs are allowed to execute concurrently, based on their side effects.

Well, that's true, but that should not have happened yet. If I were (((them))), then I'd wait for a bit until all this CoC outrage dies down a bit, and then introduce backdoors bit by bit. However, I agree that the future versions of the code will without a doubt become more insecure as corporations get more power.

I do not agree with this. Normal computer users are very used to having multiple programs running at the same time. *NIX users don't seem to get too confused when they use asynchronous execution. *NIX even lets you pass lazily evaluated strings between processes.

>everything they write gets executed in that order

That would still happen. You just don't get the guarantee that the second application will start after the first finishes.

>as you have to know whether two programs are allowed to concurrently execute or not, based on their side-effects

What is the point of starting a new project from scratch? If you really wanna accomplish something fork Linux and start heavily modifying it (like changing the monolithic kernel design to something else). Like this guy said:

Every mainstream OS is basically built upon legacy code from the 80s and 90s, which means that they are still very limited at their core. The time to make a new OS from scratch that doesn't suck is not here yet, we are still stuck with flawed legacy operating systems that just have layers of new garbage added on top of them because no one seems to care or notice.

As I said, from the perspective of a non-expert, async is way more complicated than just executing a program sequentially. Making it the default behaviour will require more effort to write sequential programs. It also assumes that almost all programs will be RPC-interactive, which I assume will not be the case.

>What is the point of starting a new project from scratch? If you really wanna accomplish something fork Linux and start heavily modifying it (like changing the monolithic kernel design to something else).

Just because I will most probably support x86, does not imply that I restrict the OS's features. Almost every feature can be emulated, even if there is no hardware support. Although it might make performance worse, it will create forward compatibility for the time when better architectures which have native support for it are released.

Right now I am reworking the article about the shell language. I am currently trying to figure out how powerful to make the shell language. If I want it to be a scripting language, I think that I should keep it simple: dynamic typing, no classes / custom records, no function overloading / type matching. However, I think that making the language more powerful would have many benefits, such as increased productivity and safety (through something like static typing, type matching, classes / records).

This has multiple problems: The semicolon rule (barrier) enforces that $result is fully written before passing it to another program. Also, since ./something and ./something_other are separated by a semicolon, the value of $result needs to be cached in its entirety, so that it can be passed to ./something_other as well. For large values, this can easily run out of memory. The same thing would happen if I allowed currying of values.

If I stay with the "all objects are streamed" approach, then this implies that every value can be processed at most once (or several times, if done in parallel, in which case the slowest consuming program limits all the others), and that values should never actually be stored within a variable, but only passed directly to another program, like this:

```
./something(./generate());
```

However, this prevents a single value from being passed to multiple programs, if those programs are separated by a semicolon.

I think that the following might work: Variable declarations can be done in parallel (using "," instead of ";"), in which case the value is not really cached, but the program is directly piped into the program it gets passed to. However, if a ";" occurs, then we need to force the execution of the program to finish, and also to cache the whole contents of the value. A parallel version would then look like this:
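A sketch of that parallel version, assuming the sequential original was the three ";"-terminated statements discussed above (the program names are just placeholders):

```
$result = ./generate(),
./something($result),
./something_other($result);
```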

Note, that ";" has been replaced by "," for the variable declaration and the program calls. If we kept the ";" in line 2, then ./generate would still be piped into ./something, but at the same time, it would also be cached, so it can be passed to ./something_other again. A problem of this is that the shell would need to perform a code analysis to know how long a variable lives and when it can be discarded. For small values, nothing of this is a big deal, but if large values are streamed as output from a program, then it is important to have a good memory management policy.

I think that this may have to be done via a "drain" operator, that can be used to pass a value as a stream to a program, and the value is deleted after use. It is similar to a delete operation, except that it allows one last usage of the value. Using this new operator, the first example would look as follows:
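With a hypothetical drain(...) spelling for the operator (the actual syntax is not fixed by the post), the first example might become:

```
$result = ./generate(),
./something($result);
./something_other(drain($result));
```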

This has the most efficient usage of the value: It is first piped into ./something while also being written to a cache, and then, after ./something finishes, it is passed to ./something_other, and deleted bit by bit as it is consumed.

This is obviously an optimization technique, but can be very important at times. For small values, this can be neglected without causing any trouble.

Also, I think that "," and ";" should be nestable such as this:

```
a(),(b(); c());d();
```

This would execute a and (b followed by c) in parallel, and after that, execute d. This nesting can be used to improve the memory management of the shell. However, as ";" and "," are not very distinct visually, maybe I should change the syntax once more.

If you don't want your whole computer to run only one gargantuan, monolithic program, you will need some form of shell that allows compiled programs to be composed / combined, and controlled. I don't really like this either, but I don't see a third way here. But if you have a third way, let me know. I would actually like to have a compiled programming language (like a custom C or C++ version or something), but I think it would be too much work. Maybe if more anons joined in on this or something.

It should have static types. Dynamic types make things more complicated for the programmer, as they are the one who has to make sure that two things are compatible, rather than having a program tell them they are compatible.

>It should have static types. Dynamic types make things more complicated for the programmer, as they are the one who has to make sure that two things are compatible, rather than having a program tell them they are compatible.

I am actually in favour of this. However, I think that this will easily result in a full-fledged programming language, because if I add static typing, then I might as well add classes, checked type casts (Object → class) as in TypeScript, etc. This will add a significant amount of work to my to-do list.

But depending on how the project fares, I can still decide whether to make it a full-fledged programming language, or whether to keep it on the level of JavaScript. I think I will specify it as statically typed for now; I can always change that later.

Well, shells are a programming language. UNIX just offers you a bad one to use. Lisp machines let you use all of Lisp from the listener. There was even a C listener you could purchase that let you use C to do stuff on the fly. (A shell is a listener with less functionality.) TempleOS had a similar thing, but used HolyC instead of regular C.

I respect your enthusiasm on this project, anons. I will drop my 2 cents.

In my opinion, by default a process should only be able to compute and execute the most basic syscalls, like calling EXIT. Any bigger syscall should be granted to a process as a capability (or by another process passing a capability on). Having a uniform FS is already too big of a security flaw, one that is currently addressed by Google's Fuchsia OS. There should be a tool to inspect the flow of capabilities within the whole OS. One should also think about network programming from the beginning, because big systems often encompass more than one machine; Plan 9 had a bunch of wonderful ideas.

The multitude of graphics libraries on Linux systems, none of which are easy to use, causes an interesting phenomenon: almost no one writes graphical programs/demos/etc. Compare that to the state of Windows or Mac OS programs.

I am not sure what you mean by listener. What I am trying to do is to make the shell more like a real programming language. However, I am not yet sure whether to give it the full capabilities of normal programming languages, or to keep it domain-specific, since you can offload any calculations to real programs.

I think that this is not fine-grained enough, but it's a good start. For example, some programs should not be allowed to access any files at all, but others should only have access to some files or directories, and not others. Some programs should only be allowed to have connections to machines in the local network. This means that every syscall needs to be unlocked, but that inside every syscall, further capabilities are needed for fine-grained permission control. This obviously requires a more complicated system than UNIX's sudo, but I think that if you enter required capabilities into the application header, you can make it easier. And some permissions might only be needed rarely, which would require a runtime capability management system, where you can temporarily enable / disable capabilities.
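As a sketch of what entering required capabilities into the application header could look like (the syntax and capability names are invented here, nothing is specified yet):

```
capabilities {
    fs.read:     /home/$user/documents/*   # read access to these paths only
    net.connect: 192.168.0.0/16            # connections to the local network only
    runtime:     camera                    # not granted at start; may be enabled temporarily
}
```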

>by passing a capability by another process

Do you mean that one process calls another process, and it has a subset of the capabilities of the calling process?

>Having a uniform FS is already too big of a security flaw

I don't see what the problem is here. I think that most current FS are not suited to fine-grained permission / capability control as needed by our OS.

>network programming

I think that the OS should support sharing resources, such as the file system, processes, etc. If you have authorisation, you can then treat resources on other machines as if they were on your machine.

>The multitude of graphics libraries on Linux systems, none of which are easy to use, causes an interesting phenomenon: almost no one writes graphical programs/demos/etc. Compare that to the state of Windows or Mac OS programs.

What do you suggest?

I also think that a program should always have two consoles: one for OS I/O and one for the program's I/O. Otherwise, if you need to enter your password as authorisation, the program could just imitate the system's prompt to steal it. This could not happen if password entry always took place in a separate console that can only be accessed from within syscalls.

I also think that every program should have a userspace and a kernel-space stack, because this would make syscalls interruptible, and allow longer syscalls, e.g. one that waits for the user to input a string (such as a password), without blocking everything (afaik, in UNIX, a syscall is aborted when it is interrupted).

What if programs in this OS could know what capabilities they have by inspecting something (UNIX-style systems would probably have a place with a file describing the capabilities, or multiple files representing the present capabilities)? I personally think a directory like /runtime/capabilities, with several booleans, would be pretty cool, and each program should have its own view. Speaking of which, what about filesystem views? There may be a "real" filesystem underlying the entirety of the system, but each program has its own "root" and gets a virtual view of the real filesystem, declaratively defined and built on each boot. This would allow programs to operate on "clean" slates where no other program can interfere with their operation, unless specified (by defining a common directory, for example), and would ensure they do not go around looking where they should not have to. Not only that, but having views should let you completely rearrange the way the FS is presented, while actually having a completely different underlying structure.

Stop trying to detract from the fact that /tech/ is all talk and no action. Whether or not I am a LARPer (Protip: I am, just like the rest of /tech/) has nothing to do with the fact that /tech/ will never create a non-cocked OS.

If the OS is intended to be "universal" as in it is supposed to run on any kind of personal computer with the myriad of PC hardware devices available today, I will predict that /tech/ will not produce such an OS. Such a project requires year upon year of dedicated programming work. If the OS is targeted to specific hardware platforms without regard to being conformant to the wider range of hardware (e.g. /tech/ approved Thinkpads only), then such an outcome is more probable for /tech/. Either way, my personal prediction for this project is all talk and no action.

You admitted to being a LARPer. You are LARPing as a plebbitor who hates LARPers but you are actually a LARPer too scared to stop LARPing and try doing something so you just LARP as a LARPer hating LARPer.

I think that the OS should offer a Capability management and detection API, where each capability is either an enum constant, a string or something like that, and you can just query it to receive a list of granted capabilities, or query a single capability to detect whether it is present.
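In the shell language, querying such an API might look like this (the capabilities.* calls and the capability name are invented for the sketch, and I'm assuming the shell gets an if construct):

```
$granted = capabilities.list();          # all capabilities granted to this process
if (capabilities.has("net.connect")) {
    ./sync($data);                       # only runs if we may open connections
}
```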

>Speaking of which, what about filesystem views?

I think that the file system should only contain files. However, every process should also have a magical directory of its own, which can only be accessed by itself and maybe child processes or something. I still have to figure out namespaces, which are to be used to expose RPCs, files, sockets, etc. to other processes. I think that every program should have its own private, protected and public namespace, in which it can allocate data. The namespace API should then allow sharing resources with other processes, either by giving them explicit access to a specific resource, or by placing a resource in the public / protected namespaces. Public can be accessed by everyone, protected only by child processes, and private only by the process itself. This kind of access control works okay for OOP, so I think we should be able to extend it to the process model without fucking everything up.

However, the namespace problem is not trivial: Should multiple instances of the same executable have access to the same private directory, or should they have separate copies? If you have separate copies, then how about private data persistent across multiple executions? I think a good idea could be that every executable has a private directory that is shared among all its instances, but every instance of it also has an additional directory of its own. This way, programs can be written to support multiple executions of the same executable more meaningfully.
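As a sketch, the resulting layout might look like this (the paths and instance markers are invented):

```
/private/editor/        # shared by all instances of the "editor" executable, persists across runs
/private/editor/@1/     # belongs to instance 1 only, discarded when it exits
/private/editor/@2/     # belongs to instance 2 only
```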

Maybe the same should be done for other resources: If you open a socket or file, you can either open it in your own private instance, or share it among all instances of the program, or make it public to all processes, etc.

I don't think that the file system should be a process, for several reasons: The file system is an integral part of the operating system; without one, we cannot even load applications (they are just files, remember?). I also think that there is no harm in committing to a single file system that suits all our needs. If you still need multiple file systems, you can still create your own servers for them like Fuchsia wants, but at least one FS needs to be supported natively. And that FS should be our own, with all the features the OS needs (such as capabilities, etc.).

What are you saying here? I never said that you can only load applications from the file system. I think it is also reasonable to load applications from other machines, or to generate the application dynamically, like directly executing the output of a compiler, without writing it to the hard drive first. However, to load anything from a file system, you first need to support a file system. This means that you must bootstrap support for at least one file system by putting it into the kernel.

<It serves a dual purpose: first, proving that our system is actually modular, and capable of using novel filesystems, regardless of language or runtime.

I am triggered that people write a file system for an OS in a meme language, and then call it thin.

reddit can't into distinguishing shitposts from real posts. I guess he was referencing >>1000419

>Hey... let's LARP that this OS is already written!

>hey guys, MrCode has just uploaded the rewritten TCP/IPv8 stack! ...and it's only using up 128 bytes of machine code! AWESOME

It's a picture of people LARPing. He actually says he's LARPing in the first sentence. TCP/IPv8 does not exist. And 128 bytes for a network stack is obviously impossible. And yet, he goes on to post about this on lellit, under his usual user name, revealing his power level to be close to absolute zero.

What I want to propose are robust sandboxing capabilities enabled by default. Let's see an example with some UNIX shell apps:

```
cat a b c > d
```

The cat binary needs access to files a, b and c. In my opinion that's a flaw: the cat binary should just be given FDs (as with the "d" file). Then the binary doesn't need permission to do anything beyond FD mangling. Most standard UNIX shell apps can be simplified to this.
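In the proposed shell syntax, that might look like this (fd(...) is a placeholder for whatever construct ends up opening a file and handing over only the descriptor):

```
./cat(fd(a, "r"), fd(b, "r"), fd(c, "r")) > d
```

The shell opens a, b and c itself; cat never gains filesystem access at all.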

Why is this important, in my opinion? Would you run an untrusted application downloaded from the Internet? On Linux, obviously not. But what if you could guarantee that such an app couldn't extract any private information without your EXPLICIT permission, and that the most harmful thing it could do is run an infinite loop? If we want that to happen, then exposing an app to a global filesystem is a total no-go. Maybe it would be possible to do some mapping system:

```
./app map:(/local: ./lib/, /lib/libc.so, /auto)
```

Where /auto would work kind of like this to an app:

```
$fd = /auto/save_file
```

And it would show a dialog window (from a privileged context) to user to pick a location to save a file and return the fd of file opened to write.

>>The multitude of graphics libraries on Linux systems, none of which are easy to use, causes an interesting phenomenon: almost no one writes graphical programs/demos/etc. Compare that to the state of Windows or Mac OS programs.

>What do you suggest?

An easy and integrated visualisation and controls library. On Plan 9 the terminal is actually a scratchpad and is accessed through a filesystem interface. TempleOS lets you draw graphics and even 3D models. I don't think that's really a great idea, but the ability to make such a call at least that easily is IMO a must:

Extending my idea from >>1014920, programs could only access files inside their private namespace, and any additional accesses would need to be whitelisted. So, by default every program has full access to its own sandbox, and everything beyond that needs to be allowed explicitly. Maybe you could pass files to programs and also give them capabilities, like:

```
./app(file(config.txt, "r"))
```

In this example, file is a special command that whitelists the file config.txt for read access. Or maybe you could have a file containing capabilities, which you then pass to a program invocation, and the program automatically gains all capabilities listed in that file. However, this file would need to be write-protected, so that programs cannot give themselves arbitrary capabilities.
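A sketch of that second variant (the caps(...) command and the contents of the capability file are invented here):

```
# backup.caps (write-protected):
#   fs.read:     /home/*
#   net.connect: backup.local

./backup(caps(backup.caps));
```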

>graphics example

I think that this is right, as it would allow you to create a window from a shell script (i.e., add elements to a window, handle events etc.). As the RPC system is quite powerful, you would only have to add these functions to the window manager's RPC interface. However, I am not sure how to handle onclick callbacks: We'd have to pass shell code to an RPC that would have to execute in the same context. So, somehow the shell also needs incoming RPC support, so that my shell code can receive calls from programs it started.

I have a better idea: Everyone in this thread makes the most basic parts of it (kernel, shell etc) to stop people from LARPing and in the end we vote which one is the best, forming a software distribution.

>The Code of Conflict is not achieving its implicit goal of fostering civility and the spirit of 'be excellent to each other'. Explicit guidelines have demonstrated success in other projects and other areas of the kernel. Here is a Code of Conduct statement for the wider kernel. It is based on the Contributor Covenant as described at www.contributor-covenant.org From this point forward, we should abide by these rules in order to help make the kernel community a welcoming environment to participate in.

I am serious. I am thinking about the project whenever I have the time / capacity for it, trying to come up with good design choices, which I then post here or in the repo. If you think that this is a waste of your time because you want entertainment, just hide this thread.

This is how Windows 98 worked. Any process could access any other process's memory and just deal with it. It was expected that the user be able to control their own computer. It was expected that users could distinguish between an executable with a virus and one without.

It's the opposite: they can't influence your hobbyist project. You don't let them in to begin with. In fact, you don't even have to use a "free software" license at all. What's more interesting to me is shitlords learning to write their own OS, so they're no longer so dependent on big OS projects that require big teams, where there's lots of opportunities for infiltration and then subversion.

I will look out for usable code to fork, once I actually start programming. But I don't have my hopes up, to prevent getting disappointed.

>It also is quite hard to foresee all that is necessary for a fully functioning OS, but it is important.

exactly.

>Though, what is the motivation for creating your own OS? Security?

At first, I was just annoyed about the CoC. But then I read more into it, and noticed more and more flaws in the UNIX approach that I had been tolerating / ignoring up until then. Security is one big aspect, but I actually want it to be as cleanly designed as possible. And security is a direct implication of clean design. The old "the right thing" vs "it just works" debate.

I am still unsure about whether to have a full-fledged programming language like C or C++ as the shell language (possibly compiled), or whether to keep it simple and limit it to program invocations and some basic function support. If the shell language is powerful enough, then most programs could be written directly in it. The shell would then run an interpreted version (possibly a subset) of the language. I think this could be very beneficial, since most programming languages are good at accessing the CPU, but bad at accessing the OS. On the other hand, if the shell language is not good enough, nobody will use it for more than little hack jobs, and then there is not much of a point in making it a full programming language.

I just came up with a mascot: I think a lich would be pretty cool. I'd call him Walter: In https://en.wikipedia.org/wiki/Alice_and_Bob Walter is the warden, protecting Alice and Bob. He'll be the OS's protector. Because I know you guys also want a female mascot, there will also be his pupil, Alice.

I think that taking names from the cryptography naming list is appropriate, as security is an important aspect of the OS. I think that there have been too many Merlin characters already, which is why I wouldn't want to call him Merlin. But if you can convince me otherwise, I'll listen.

Although liches are generally viewed as evil at first glance, he is the good guy. Meanwhile, Eve is the evil eavesdropper fighting against Walter and Alice, trying to learn all their secrets. The twist that a lich, who is commonly viewed as evil, is actually the good guy is fitting, because the government, which is supposed to be the good guy, is the one performing mass surveillance; effectively, their roles are reversed here.

These projects are doomed to irrelevance. You're doing this for political reasons, not technical. Linus passed the CoC shit to make a point and to distance himself from other people he didn't want to be seen as being associated with. It's been questioned and edited officially many times now. Nobody has given any reason to doubt the kernel developers, at least not yet.

If you have dreams of a custom OS, you'd be better off building a custom user space on top of Linux instead. It doesn't have to be all C and GNU shit. You can make a user space in 100% Lisp, Scheme or anything else you want. All it needs to do is make Linux system calls. It's not hard to put the concept of pointers into these languages.

I don't want relevance. In fact, I want to stay in a niche. My aspirations don't align with economic interests of corporations anyway, so they wouldn't use / contribute to my OS.

>You're doing this for political reasons, not technical

I said it somewhere down the line: I don't like the CoC, but the CoC was just the straw that broke the camel's back. I don't like how Linux is designed, and want to make a well-engineered OS I can agree with. I don't care if it takes me a decade or two to finish (depends on how many anons join). And although building on top of Linux would give me faster results, it would bring with it all the crusted legacy shit that Linux has.

>Nobody has given any reason to doubt the kernel developers, at least not yet

The Linux Foundation is in the hands of corporations whose best interest is to destroy Linux forever. Non-conforming kernel devs get banned.

As long as I have my vision, it doesn't matter how long it takes to finish it. I actually feel satisfied just by working on it. I think that this attitude is actually important, because otherwise I would be led astray by temptations like using linux just to get it done quicker.

The OS is the kernel plus all the essential utilities that get shipped by default. The shell is part of the OS. For an "easily programmable OS", it's important to consider the shell in detail. But I think that now I am almost finished with the conceptual OS design. More details will make themselves obvious as I actually develop the system, but since the general concept is more or less clear already, I think I will soon start programming / planning the code architecture.

I don't have a source to back that claim up, but it is pretty obvious that it will happen (if it didn't happen already). These tools for banning people will be misused to ban everyone who might be brave enough to stand up to the corporations that want to fuck the users over.

Then we have no reason to believe they will be misused. The whole kernel CoC thing happened in response to a hit piece journalists were preparing on Linus. The same people are still in charge. The same maintainers are still there. Until someone actually gets banned from Linux over bullshit there's no reason to stop trusting Linux.

Always remember that Linus's version of Linux is just one branch. The only reason Linus has power at all is trust. People think his specific branch is the upstream because people trust that he will do the right thing, they trust that he will do what's best for the kernel even if he has to shit all over other developers in the process. If Linus or his people compromise that trust in any way, they're the ones that will end up getting banned. Power is loaned to leaders and it can be revoked as soon as people believe their interests are no longer being served.

>Didn't they accomplish that already? I mean, why else would there be this whole discussion?

Not really. Most Linux users don't even interact with the kernel hackers. They could do so, but they don't. They pay companies like Red Hat to provide support, patches, custom features and shit. Those are the people who talk to the kernel hackers.

Most of the stuff this board hates actually comes from Red Hat. Shit like systemd, pulse audio or whatever. It's all completely inconsequential user space shit nobody cares about in the grand scheme of things. Kernel will outlive all of this stuff.

That's because they lack the finances and manpower to actually fork it. People could theoretically avoid systemd, but very few distros actually did. How many do you think would survive a fork of the kernel?

Well, that's the best-case scenario. Corporations are buying into Linux because they want a FOSS kernel that doesn't cost them money, and they fight out the money shit elsewhere. idk anon, but it's sketchy as hell.

Even if, for argument's sake, Linux was not undermined, it would still not be an ideal kernel. It is written under the attitude that as long as it works, it passes. And they ended up with logical abominations like "everything is a file". Even though this is just one example, it is sufficient to make clear that Linux is not well-designed.

Well, I lack both as well, but luckily, I have no immediate need for the system to be running in some production environment.

>sketchy

Agreed. I can't trust anything that Google, Microsoft etc. are in control of.

Btw, I'd like to have some input on what you guys think of my proposed mascots (>>1016611 and >>1016613). I'll soon get myself a graphics tablet, so if no drawfag is going to make them, I'll try to. I think I can draw okay-ish, but I'll need some practice to create passable quality works.

P.S.: Soon, we'll need a new thread. I'd love to have a logo by then, but I think I won't have one in time.

>It is written under the attitude that as long as it works, it passes. And they ended up with logical abominations like "everything is a file". Even though this is just one example, it is sufficient to make clear that Linux is not well-designed.

Now that's an actual technical argument. I agree, Linux is not ideal. Some things about Linux just piss me off. What I personally get very mad about is the whole signals system and the fact that there is essentially no way to do signals right, and also the lack of asynchronous I/O for anything but sockets. Fucking Windows has superior asynchronous I/O compared to Linux. You can't submit file descriptors and say "copy this to that in kernel space and give me another fd I can epoll to see if it's done" oh no no no, you need a fucking user space thread-based GNU abomination library.

Even so, you can't simply throw the whole thing away. That's stupid. Linux is the greatest free software achievement ever. The sheer amount of drivers it has makes it invaluable, and it's all GPLv2. At the very very least I expect new kernels to port the Linux drivers for their own use, and even then it's a fuckhuge amount of work. And for what? Building a user-space API that makes sense to you? Better to virtualize that shit. Programming languages and their implementations are a far more realistic goal. It would actually be interesting to see a 100% Lisp user space running on top of Linux, not just some Emacs inner platform abomination.

People like systemd because it makes life easy for them. Sure, the developers are insane and have a shit attitude towards backwards compatibility and fixing bugs, but that doesn't make systemd pure garbage. It's not perfect but it's not as bad as people make it out to be either. The thing that sets systemd apart is it was built specifically for Linux. It's not some least common denominator POSIX shit. It actually uses things like cgroups, namespaces and other exclusive Linux features.

The point is distro people actually like systemd. From their perspective, there's nothing to survive. It makes their maintenance much simpler and actually does a lot of things right. There's a thread on this board for posting shell hacks and someone posted a program that does stupid shit like double forking in order to "daemonize". It's these people who are fucking stupid, regardless of whether you prefer systemd or some other service manager. People do that because they literally do not understand how process management works, much less service management. And we certainly don't have to be stuck doing stupid shit just because some 80s neckbeards made a tradition out of it. Systemd, like many other service managers, does this right and it uses a simple declarative config file for each service that upstream projects can include in their repositories. Distros like this because it means less work.

Distros also ship multiple kernel versions. If Linux got forked, it would simply mean building an additional package. Unless the fork went full apeshit and started breaking userspace on something at which point it would be worthless anyway.

Please stop wasting people's time. You haven't done anything yet, and I don't think you're capable of creating an entire OS (very few people are); also, the reasons listed for creating your own OS are very weak, I'd say.

Just think hard and deeply about what your motivation is and come up with a minimum viable product you can actually realize. Thanks.

If I can make a suggestion, let's make the most optimal version of a hypervisor, like ESX.

We know everything is getting cucked; if we control the bare-metal OS, we can control the VM-OS/bare-metal network and have more control over what data flows out of our OS.

I'll help where I can, but this isn't my specialty, and it'll take me a long time to get good enough to make something functional, get a POC working, and get someone interested in contributing.

I installed rEFInd and have a bunch of OSes on my MacBook, but it'd be GREAT if I could just get something ESX-like and switch between those operating systems. Running them as VMs is fine, so long as I have full peripheral use.

Well, as I said multiple times, I am not against code reuse, but the general structure of the Linux kernel will most probably not be usable for me. However, if there are usable, well-designed drivers out there (I'll have to evaluate them), then I'll gladly use them. I am not trying to make it artificially hard for myself, and I don't care about not-invented-here syndrome. All I want is high-quality code and design (and free software, of course). To begin with, I'd start with basic hardware support, then focus on the feature side, and then come back to implementing drivers. We don't need fancy hardware features from the get-go.

>Better to virtualize that shit

That's not really what I want though. I know that this is a huge task, but I think it's worth the effort to make an actual OS out of it. Even then, you can still make a modified version of my OS that builds upon Linux, and is connected via an interface. But personally, I don't like the idea.

>100% Lisp user space

I am not sure what you mean. You mean that the shell language is Lisp? Or that all programs must be written in Lisp? That the user space is implemented in Lisp? I am not too familiar with Lisp, so I don't exactly get what you're trying to say.

>So, did I get this right, you want to make a high quality hypervisor instead of a new OS?

Technically, a hypervisor is an OS, but yes. That doesn't mean it can't be expanded from HV to a full-blown OS, just that if you start as an HV, we can compensate for all missing features of current OSes by running a VM.

Think Hyper-V DC edition, not much more than PowerShell and VMs. Once we have virtualization with console access, we can switch through VMs like screen sessions while we expand the HV OS to include things like browsers and whatever else people use these days.

>Thanks for the offer. However, if I understood you right, then I don't think we share the same goal.

I'm still trying to figure out your goals; you don't want gaming, just a "futuristic fresh design" and a decentralized codebase.

Sounds "good" but unclear what's different from forking a nix distro and keeping revisions decentralized. But if what I suggested doesn't seem like a good base to start, np. I'll get to it myself eventually

While I think compatibility with other file systems is key (ZFS has licensing issues, though), I'd like to consider writing one from scratch. Why? Because as far as I can tell, the only people to even consider what I want in a fs were Microsoft, when they tried making WinFS.

Allowing files to exist in a relational database allows you to organize data so it can be used in different contexts. Having one file in two folders means not fussing with symlinks just to have a consistent file structure while avoiding duplicates. Additionally, standardizing the way data is stored to allow software to easily extract it is a fantastic idea.

If we're truly breaking from convention then there's a lot of great ideas out there to be stolen from dead projects like Longhorn and Plan9.

So you want to develop an HV that hosts multiple OSes at once, and then use bits and pieces of those OSes to form a new OS?

>decentralised codebase

The decentralised codebase is important, but only really comes into play once multiple people join in.

>you don't want gaming, just a "futuristic fresh design"

That's about right, I want to create a well-designed OS without any compromises on quality. I don't need it to have fancy support for games, although it's always a plus. Why don't I want it? As soon as gamers use your platform, they will push for the next technology AMD and Nvidia release to be added to the system. This is problematic, as they release proprietary drivers.

>What's different from forking a nix distro

As far as I know, none of the *nixes have been designed with quality as the top priority. I want to use high quality software only.

>But if what I suggested doesn't seem like a good base to start, np.

The way I understood your idea, it seems like it will result in a hackjob. The HV itself might be clean in design, but the general architecture of hosting multiple OSes and using bits and parts of their functionality does not sound clean to me.

Well, of course we need to support other file systems, otherwise, we would never be able to exchange files with other OSes.

>Write an FS from scratch

I think that's important as well, especially as we need the FS to support capabilities (for which usual file systems are not sufficient) and encryption (with key whitelist access control per file). I think that this also implies that there needs to be a file transfer protocol that supports all our FS's features (the access control needs to be copied as well).

>Files in a relational database

That's a nice idea. Additionally, since a file stores an object, and an object can contain an associative array, we can actually have files as tables (if they store an associative array).

>If we're truly breaking from convention

That's what I plan to do, although I don't want to make it forced. Not everything in Linux and other OSes is bad. A problem right now is that I didn't have any new ideas recently. I'll look into Longhorn and Plan9 (I already got to look into the latter a little). If you would like to make it easier for me, could you be so kind as to give me a few hints so I can find it quicker?

It seems like what you're suggesting is TempleOS with a better design. My suggestion is similar, except it doesn't need a good design to start; it just needs virtualization and security to ensure the VMs are contained and firewalled from the bare-metal OS.

My question about forking Linux is because you can strip anything you don't like and optimize it, i.e., reverse engineer and restart with new priorities.

Either way, I plan to play with TempleOS and Plan 9 to see how to integrate the best of both, then remove the stuff I don't see fit. Decentralized is essential, but I don't know how to do that without a blockchain-like system; otherwise it'll be centralized in some way.

>In general, I suggest you lurk moar, and recognize that the only way projects get done around here is by having one sperg do all the work, and have everyone else tell him what features to have. You can either be that sperg, or give up now.

Way to go, mr. anti- idea guy guy. We were on the verge of untold success. Now OP is an hero and I'll be taking my ideas and this cool logo/mascot that i drew and going somewhere less toxic...

>I am not sure what you mean. You mean that the shell language is Lisp? Or that all programs must be written in Lisp? That the user space is implemented in Lisp? I am not too familiar with Lisp, so I don't exactly get what you're trying to say.

Currently everything is either written in C or written on top of C. There's a tremendous amount of baggage associated with that. On Windows you're actually required to do this: you have to use the Windows API user space DLLs since the kernel interface isn't stable. You're stuck with this garbage user space you can't ever get rid of. Even .NET applications go through that shit at some point, either directly through DLL loading or indirectly through the virtual machine.

On Linux it doesn't have to be that way. The kernel interface is not only stable, it's built directly on top of the architecture's instruction set. You set up some registers with parameters, issue some system call instruction and get the return value on another register. It's simple, functional and breaking it causes Linus to publicly shit on the person responsible on the LKML.

The point is you don't need a GNU C library or anything of the sort. You can have your programming language run directly on the kernel with nothing in-between. You can write a JIT compiler that can generate the system call code for your programs at runtime. If you can make Linux system calls, you can do anything. You can do I/O, memory management, process management, audio and video... You name it.

Doesn't have to be Lisp. You can do this with any language. Used Lisp as an example because it has a history with operating systems that people want to revive. It's actually a very realistic goal, as long as you build it on top of Linux instead of doing it from scratch.

We don't need a new OS. Linux getting the tranny-coc doesn't matter as Linux was always shit for other reasons.

Windows is the best operating system. It just works. You can do everything on it. You can even sit down a mongolid pajeet in front of it and a few months later he will be at a usable skill level. If that isn't perfection, I do not know what is.

Android is the attempt to unfuck linux. It was a good attempt but it will never really work as Linux can't really be unfucked. At least it improved on it.

Linux doesn't even have video decoding acceleration in its browsers. This is a 10+ year old wontfix at this point, because its entire userland is just that much of a complete mess.

>I think that using GPL3 or even AGPL3 is good though, so that the Microsoft EEE paradigm doesn't apply.

No, even better is a modified two-clause BSD license which includes a third clause that requires its contributors and users to denounce "genders" beyond male and female, and to denounce trans-gender as reality, and that Hitler is world führer and all praise of Hitler must remain user-visible at application start and in any about or version screens/outputs.

Well, TempleOS was intended to be a modern C64 equivalent, and that's not entirely what I want. I want a complete system of very high quality, suitable for any task. The C64 goes in a slightly different direction, although I don't exactly know enough about it to say what the difference is.

>I'm not sure how you're coming up with calling my idea a hack job

>My suggestion is similar except it doesn't need a good design to start.

>doesn't need a good design

If I start out with a working, badly designed prototype, I'll grow complacent. That's why I don't like the approach.

>Either way I plan to play with templeos and plan9 to see how to integrate best of both then remove the stuff I dont see fit.

Good luck with that. Keep us up to date with it, it sounds interesting.

You mean to create a programming environment that directly accesses the kernel via assembly, not via additional abstractions?

I don't see any problems with this, although I don't see a problem with having a C / whatever library that wraps around system calls.

Going through a DLL adds one layer of indirection to syscalls, which can be argued to be inefficient, but on the other hand, the kernel interface is no longer required to be constant. This might actually be a trade-off worth taking, depending on its upsides.

>No GNU C library, directly access the kernel

The C library is there to make it easier and more readable to access the kernel (and to fulfill other C standard functionality). You can also link your programs without it, but then you'll lose your platform independence, as your code will no longer rely only on standardised functions. Which is okay if you only target epOS anyway. Btw, I don't like the C standard either; it has many unnecessary things, like reading files in binary vs. text mode, which come from legacy OSes and are still there for compatibility.

kek. I don't wage war on mentally ill people. If I did this and my power level was ever revealed, it'd ruin my life. Even when "anonymous", I am still careful about the possibility of making a blunder. So I won't do nazi, pedo, terrorist or other stuff that would get me jailed / lynched / killed if revealed.

Not really. It forces a language on you. It's full of baggage, just like C itself and its standard library. They can't really be changed without breaking everything so they will remain bad forever and you're forced to deal with it. Linux doesn't care what language is making the calls so long as the calling conventions are right. Linux maintains old interfaces but you don't have to use them. The GNU C library is the closest WinAPI analogue and not even that is required.

That's exactly how everything works in Linux. The only interface it knows is some specific registers and the syscall instruction on x86_64. People link against a big-ass C library instead of doing it themselves, and that is the source of a huge amount of annoying shit. The Linux interfaces are actually much better designed than the C standard library.

Instead of using a C library, you could make the compiler emit the assembly required whenever a kernel operation must be performed. There need not be any C at all in user space.

>the kernel interface is no longer required to be constant. This might actually be a trade-off worth taking, depending on its upsides.

Don't ever do this. You will break user space.

>Which is okay if you only target epOS anyway.

There's no reason to care about other systems. Portability is overrated. Better to make the most of the system you're using.

Try it. Try writing a program without any C in it. Try to avoid linking to one of the hundreds of craptastic Windows DLLs scattered all over the system. Do people even know which one is correct, versions and all? I've forgotten by this point. You just can't do it. You can use the kernel interface, but your program will just break the next time they fuck around with the calling conventions. You're condemned to Windows DLL hell instead: even the user-space DLLs aren't stable, so every program ships its own. Gotta import a shitheap of C code and all the baggage that comes with it. Baggage that is going to follow you all the way to the high-level stuff.

In an earlier post, I said that every executable has its own sandboxed private directory, and additionally, every instance of an executable has a temporary private directory. Process instances all share the executable's private directory, while their own private directory is not shared among processes.

I took this concept further and created a singleton or default instance for every program: If you RPC a program (the executable file) instead of a specific process, then the program's singleton process is targeted. If it didn't exist yet, it is created by invoking the program without any arguments. Then, the RPC is executed. Whenever a program is executed, if it had no previous singleton process, the new process is registered as the program's singleton process. This way, you don't need to remember the process ID of a shared service, as this is handled by the OS. It would be used as such:

./shared-executable->do_something(args);

Next, I thought about shared resources. As described in an earlier post, I plan to have shared resources accessible via URLs, the protocol identifying the type (i.e., a message queue would then be queue://something). I think that a process's shared resources should be located within its namespace. For singleton processes, this is very simple:

read(queue://path/to/executable/queue-name);

This looks for a queue resource located inside the singleton process of the executable. However, in some cases, programs might want to share a resource at a global location, so that other processes can access the resource without having to know which executable's process it belongs to (imagine a message queue in /mail/mail-address-1 to read incoming mails, regardless of what mail program is used to receive them).

I'll also have to think about persistent resources, i.e., those that should survive a power outage and be present after booting. An example of this could be a mutex that is locked by a remote machine; in some scenarios it might be important that that machine still owns the mutex even after the server crashes. This means that resources should have a lifetime assigned on creation. Persistent resources would have to be stored on persistent storage, so that even during outages, they keep their last state. Maybe I should make a cached mode and an uncached mode for persistent resources, so that uncached persistent resources are never cached in RAM and always directly updated on persistent storage, while cached persistent resources are held in RAM and only written to persistent storage after the process exits. Of course, temporary resources are never written to the disk and are destroyed when the process exits.

I just happened across the uniform driver interface (UDI) project. RMS seems to reject this (https://www.gnu.org/philosophy/udi.html). However, it seems to be technically well-designed (didn't read through it yet, only read the summary). I'll need to evaluate it further before coming to a decision, but I think I might adopt the standard, since it promises efficiency and safety.

Looks pretty cool. The GNU article is short-sighted, though. The free software community can benefit greatly from this if you stop assuming Linux is the only kernel out there. HURD could be near usable if UDI became commonplace, using Linux drivers, and so could Genode OS, which would be cool as all fuck. The article even mentions that ESR suggested making drivers public source as part of the spec, which would mean things would stay pretty much as they are now in the proprietary driver department, but benefit the libre driver department hugely.

I am not sure how to handle access control yet, though. I think that having a public/ directory might be an easy way for sharing resources with the outside world. I'd make the process's namespace's root write-protected, and make it contain a public/, private/ and shared/ namespace. public/ and shared/ can be accessed by all outside programs, while the private/ namespace can only be accessed by the process itself (and probably by processes it explicitly gave permission to). Both the public/ and private/ namespaces are temporary and are destroyed with the process. The shared/ namespace is a reference to the process's executable's shared namespace (which is also the singleton's namespace).

>I am still unsure about whether to have a full-fledged programming language like C or C++ as the shell language (possibly compiled), or whether to keep it simple and limit it to program invocations and some basic function support

Shells are supposed to be as terse as possible to enable the shortest programs possible. The fact you're even considering a real programming language for the shell already makes me doubt your skills. Why the fuck are you even discussing shells to begin with? Is your OS booting and running programs already?

The shell is an important part of the OS to consider, especially if one of the top goals of the OS is to allow easy and intuitive programming. I was thinking about what would happen if I made it as powerful as C++, for example, and whether that would make programming on epOS better. I have not decided to do that yet. It would be another huge addition to the workload I'll have to do, so I think it is most probably not worth it. Still, if there are any big benefits to it, I'd still do it.

So what do you suggest instead of a shell language? How do I invoke a program with custom arguments? Through a GUI? Or do I have to write a program in C (or a compatible language) for that? The way I see it, a well-designed shell language is very useful if you just want quick functionality and mediocre speed is sufficient. But if you gained some enlightenment I didn't, mind sharing?

Yeah, looks like you know nothing about programming languages. You ``cannot`` do what you're "considering". The design goals of real languages and shells are not just different, they're in total opposition to each other. At least that's true if you actually want your shell to be useful to people and not a verbose piece of shit.

Of course it will be a huge addition to your workload. You probably don't even have a goddamn basic kernel running yet, much less an actual user space.

If you only consider lines of code as work, then no. The programming phase didn't start yet. I take the design of my OS very seriously, and if you cannot appreciate that, then you should probably ignore this thread.

The example of PowerShell shows that you can have very feature-rich shell languages. I personally think that a Turing-complete shell language is very useful. How close to the actual machine a shell language should get is another open question: you could even take something like Javascript as a shell language if you were feeling funny. You might even go as far as TempleOS did with HolyC. Although both Javascript and HolyC are not good shell languages, as they do not have OS functionality built into them at the language level, this should have made clear that shell languages and "real languages" are not in total opposition to each other. It is just that, often, shell languages are very minimalistic and only focus on accessing files and invoking executables. Creating a "real language" as the shell language would be a very large task, but would result in a programming language that can be used to easily program for epOS without limiting the functionality. I actually have to agree with the Lisp anon here, on the point that a full-fledged shell language is worth considering (although I don't like functional programming languages as much as imperative languages).

>You probably don't even have a goddamn basic kernel running yet, much less an actual user space.

Of course not. I didn't start programming yet.

>Fool. Unless he posts a git repo and you literally compile and run the thing on qemu, this thread's nothing but mental masturbation.

>mental masturbation

Call it as you will, but I think that the design needs to be thought out properly before programming. If I just started programming, then the further I got, the harder it'd become for me to change something fundamental again out of complacency. I don't want quick results but something I can be proud of.

>The example of PowerShell shows that you can have very feature-rich shell languages.

Way too verbose, even though it was actually designed properly. Have you actually written any PowerShell scripts? They're 5x as bad as bash scripts just because of the way they name commands, and bash isn't perfect either. Shell languages are the complete opposite of real languages. C scales up to millions of lines of code. Shells scale down to single lines and as few characters as humanly possible. You do not want to write anything complex in shell languages. They exist to direct I/O. They're there so you can tell the kernel where to connect the pipes and absolutely nothing else. They are data flow languages. It's extremely cringeworthy to look at even simple text processing in bash scripts; you cannot fathom how much the PowerShell equivalents suck.

I know you think this is the "right" thing. It isn't. If you can't accomplish what you want with one line of shell code, you're not supposed to use the shell to do it.

Lisp is actually very simple and minimalistic syntax-wise. It still manages to be more verbose than bash, FYI. Why? Because it has syntax to distinguish between multiple types. In a POSIX shell, the token "1234" could mean a string, a number, a file name, a file descriptor... It makes life very easy for the user by pushing all the parsing work onto the programs themselves: it's up to them to figure out what the fuck "1234" means. Even if you used a minimal Scheme for the shell, it would be shit, because you'd be quoting and unquoting every fucking parameter or setting up objects before using them, and even that's already more trouble than what *nix shells give people.

There are certain things you know going in you're going to need. You know you'll need a kernel, so you can get to coding that right away. The kernel is going to need drivers and multiprocessing and scheduling, so you might as well add those things in. You're going to want a hierarchical filesystem, so you can start coding that right off the bat, etc. Having a solid base around makes it really useful to prototype. You could have realized what a big mistake you were making regarding the shell all by yourself if you'd had the ability to try out a version of it and see how unwieldy it is. Instead, you're stuck suggesting fucking javascript as an alternative.

Another point: in the time you were jacking off here, you could have learned OS dev, and written an initial prototype. Your first try is going to be trash no matter what, you might as well get it out of the way.

He could discuss the inadequacies of shells all the time if his project was a new user space for Linux or something of the sort. But no, he wants to make an OS. Might as well brainstorm the GUI too while we're at it. Let's make it work and look like the ones we see in the movies? I saw a project like that on github once and it was cool as fuck. I could literally hear the military briefing music on the air. Written in electron, too. Should work well with a javascript shell.

That GUI is a cool idea. I really like that. I think that a really stylish GUI can make using the system more fun and rewarding. It seems like most movies envision hologram interfaces. On glossy screens in particular, those have a very special appeal.

>javascript shell

Javascript was actually an exaggerated example. Javascript grew to be a useful and feature-rich language, but its fundamentals are flawed (see all those inconsistencies, like what happens when you do [] + {}, or {} + [], etc.). I'd like to see something as mighty as Javascript, but with cleaner design and static typing.

>Static typing is important and allows syntax highlighting, static error detection, and more. It has a big impact for production-level code.

Never said it wasn't important. I said your shell was going to be garbage. Also, that's not static typing. I don't know where the fuck you got that idea. I was talking about the fact Lisp has syntax for different types like strings and numbers while bash doesn't.

How do you get static typing if no expression has an identifiable type? Or do you suggest supporting numbers etc., but everything is a string by default and I'd have to manually parse integers? That's retarded. Also, I created the object system to simplify communication, so why shouldn't my shell support that?

I was also thinking of using a semi-microkernel: modules that are known to work (or at least didn't reveal any bugs) can be grouped together in a single process, without having to reprogram them. This makes communication faster. A module does not need to know whether it is isolated within its own thread. Basically, a module is a coroutine, and every module process has a scheduler.

I did some more thinking, and I noticed that you could remodel the module composition at runtime. One problem is that you'd have to manually map the code of the modules, depending on which module is currently executing. However, that should be reasonably easy to accomplish.

>How do you get static typing if no expression has an identifiable type?

I'm NOT talking about typing discipline. I'm talking about syntax. Do you understand? When the Lisp evaluator sees 123 it produces an integer with 123 as its numeric value. When the Lisp evaluator sees "123" it produces a string containing the characters '1', '2', and '3' in that order. You have to type in different things in order to get different types of objects and there is extra syntax for every supported type.

>Or do you suggest supporting numbers etc., but everything is a string by default and I'd have to manually parse integers? That's retarded.

That's how *nix shells work. Everything is a string, so the user types 123 if he means the string "123" or the number 123 or the filename 123 or the file descriptor 123 or whatever. It's easy to use and hard to build an interface around because parsing input into data structures is a hard problem not every developer cares about.

>Also, I created the object system to simplify communication, so why shouldn't my shell support that?

Because it will be verbose and hard to use. If you can't outcompete "worse is better" *nix shells you're going to come up with a bad shell language and a bad systems language that sucks at both tasks.

I think that in order to implement the semi-microkernel architecture, the kernel needs to support DLLs. Modules need to be position independent code, so that multiple modules can reside in the same address space. As a performance optimisation, I could even load object files, and link them together when loading multiple modules into the same process. I have to identify the interface modules require. I'll probably need a whole library of communication primitives that work across processes, as well as within a process. I'll also have to think about whether true threading or coroutines/fibers will be better. But even this aspect can be hidden behind an abstraction layer. I think it's best to program everything in a cooperative multitasking way, so that it can run in a single thread, but if it's run in multiple threads, it runs on preemptive multitasking instead, and those cooperative yields are nops.

What objects does the program receive in its main function? Is it a number, a string, and a path, or three strings? Or a number, and two paths? Three paths?

If there is no syntactic distinction, then the program needs to figure out which argument has which type. Therefore, it is important to have a distinct syntax, so a program can just query what types of values it received.

>outcompete "worse is better"

That's simple. A cleanly designed language will be much more appealing than a piece of hackery like bash. At least to a newbie. Those who already learned to be content with bash will prefer bash of course, but those people wouldn't want to use my OS anyway.

>I think it's best to program everything in a cooperative multitasking way

You can't be serious.

>If there is no syntactic distinction, then the program needs to figure out which argument has which type.

Wrong. They always know what kind of data they're expecting based on the position of the input. If you write a program that treats its first argument as if it were a number, then that's exactly what the first argument's type is. The thing is the program must convert the input data (ASCII or UTF-8 encoded text) into an actual number.

You want the OS to do this parsing work. A respectable goal. You're going to pay a serious price though. You're going to make your OS interface into a programming language instead of a shell. You really don't want to do that.

>A cleanly designed language will be much more appealing than a piece of hackery like bash. At least to a newbie.

Users get annoyed by things as small as the need to quote paths that contain spaces. You want to have them quote everything instead. You haven't even started yet and I already know it's going to be a pain in the ass.

I was talking about cooperative multitasking that can be turned off if a module is alone in its container process. So if you group 2 modules, they run cooperatively, gaining a huge efficiency boost for communication. When you have only one module in a process, then it ignores the cooperative multitasking "yield()" calls, and instead performs preemption. You need to understand that message passing across threads/processes is very expensive, and the biggest bottleneck of the microkernel approach. I am proposing a kind of microkernel where you can group multiple modules within the same container at runtime, to make them communicate faster, which results in a faster kernel, while still preserving most of the microkernel architecture's benefits.

I am still planning the general architecture, and right now I am planning the design of the kernel. When I have that done, I can actually start writing the kernel. Then comes the kernel modules (basic drivers etc.), and the shell.

yield() is a function that passes control from the current execution context to the next. In a cooperative multitasking environment, you need to call yield() to pass execution to the next coroutine. For maximum efficiency, communicating modules grouped within the same process should be coroutines that manually yield() to each other. If you use the pure microkernel approach instead, yield() does nothing, as every module is in a separate process. But as I want to allow custom grouping of modules into processes, the modules need to use a mix of cooperative and preemptive multitasking to communicate. This approach has the benefit that you can still isolate failure-prone modules in separate processes, but combine well-tested modules for a performance gain.

So instead of threads you have "modules" which are actually coroutines inside a single process. The kernel preempts processes, but "modules" yield to each other inside the process. Sounds like a standard userspace coroutine system. God I can't believe you're going to force this kind of complexity on user space programmers. Even worse, you're apparently doing this to offset the performance penalty of your message passing microkernel design. It's just ludicrous.

If there's only one module, why the fuck would it ever call yield? Code relinquishes its time slice by performing blocking system calls. It's safe to preempt early if your process starts doing I/O instead of computation because it's obviously not using the CPU anymore. There is no need to ever yield to a preemptive multitasking operating system. Why would a call to yield() even be present in application code in these cases?

You should watch that video. He talks about how full of pride the Linux people are. And now Linux is getting SystemD'd and CoC'd, but TempleOS never can be. Ditto with other hobby projects with only one guy in control.

The kernel modules are coroutines, they do not run in userspace. Userspace programmers don't see any of this.

>why call yield?

Because the same code should work for a single module per process (which is the most secure but slowest), and also for multiple modules per process (which is faster but less secure).

>There's no need to ever yield to a preemptive multitasking operating system.

Yes, but if you have multiple coroutines in one process, you'll have to do it. Coroutines are the most efficient form of execution for message passing and communication.

>Why would a call to yield() even be present in application code in these cases?

It wouldn't be. Applications do not need to manually yield (except in some corner cases). I am in favour of a preemptive multitasking userspace and kernel space, but modules themselves need to use cooperative multitasking if they are within the same process, and ordinary signalling otherwise. By supporting cooperative multitasking, I make my code independent of the actual kernel layout: it won't care whether a module is alone within its process or grouped with other modules; it will still work in both cases without recompiling.

Exactly the reason why I chose the microkernel architecture. I want a failing process (containing one or more kernel modules) not to fuck up the rest of the system. If one process containing multiple modules fails, just split them up, so that the next time one of them fails, the rest is not taken down with it (even though the system will run slower then).

He demonstrates just how unwieldy context switches are, and that is exactly what makes microkernels so slow. My approach lets you decide where to isolate / encapsulate, so that you can trade safety for performance at performance-critical junctions, without having to rebuild your code base.

Yeah I saw that. Pride's not unwarranted. Linux is the most successful free software project ever. It's the most flexible OS ever -- you can replace all the user space with whatever you want, even completely custom shit. It moves so fast, companies are encouraged to merge drivers into the Linux tree because otherwise they'd get left behind. TempleOS is stuck trying to spin its lack of memory protection and networking into features.

Systemd is not Linux. It's just some user space application. That's like saying Linux is getting Gnome'd or KDE'd.

They have Ts'o by the balls. He's singing their tune now, just like Linus. It's very obvious if you read their mailing lists on topics about the CoC. Basically the whole project is subverted, and anything Intel wants to push won't get any resistance anymore.

They even made an interpretation guide. In my opinion, if you need a guide to understand some text, it's probably garbage, but hey, they're trying. Linus is still shitting all over people who do stupid shit. I've seen it. He's nicer now but still pretty brutal.

Save your panic for the day Linux actually gets fucked over by these SJW faggots. Today is not the day.

(To understand the next ramble, it might be helpful to read the article first.)

After designing the module / container model, I had an epiphany: If kernel modules can be grouped, why not group real programs into one process? Take, for example, a pipeline of multiple programs (such as in a compiler). Why make all stages of the compiler pass messages over IPC via context switches and whatnot, if they could be composed to be located within the same process, where each program is basically a coroutine (lightweight thread), and message passing is much more efficient. Of course, programs relying on the assumption that they run in a process of their own are not compatible with this approach, and need to be kept within a process of their own. Although one faulty program can fuck up all programs within the process, that's essentially the same within any toolchain: If a single link breaks, the whole chain breaks. Thus, I think it would be justified to have multiple programs reside within a process as lightweight threads for a potentially massive increase in performance.

This needs special consideration from the software programmer: manual yield points to pass execution between coroutines are needed (I also explained this in the article). Note that there is a difference between coroutine yields and process yields, as one is lightweight and one is heavyweight.

No. They're not threads. Threads are different from coroutines. Multiple coroutines run in one thread. They are a more efficient form of multitasking, if done right. Since I plan to use a modularised kernel, to offset the crippling performance penalty incurred by the "one module per process" paradigm, coroutines are necessary.

>implicit obligation to not fuck it up

With everything you do, you have the implicit obligation to not fuck it up. Less so in strict microkernels, but you trade obligations for performance penalties in microkernels. So I think this is a valid approach. Also, I never claimed to have invented threads or coroutines. I merely used those techniques in places where they were previously not used.

Programs will need to be compiled as position independent code, so that I can load multiple programs into the same address space. Programs will always be loaded into a process with a coroutine scheduler that is given by the OS (or maybe customisable? Anyway, there will always be a scheduler inside every process that is not part of the executed program). The scheduler is responsible for message passing and keeping track of what programs / coroutines wait for what resources. This is done by supplying a library containing things like futures, mutexes, condition variables, message passing etc. to the program. The program does not directly access the scheduler (except when manually yielding), but does so implicitly through accessing those primitives. The programs use the library to communicate with other programs within the same process and in other processes, without having to care where they actually are. The library optimises the performance by only contacting the kernel if necessary, and handling intra-process communication itself.

I am currently trying to figure out what architectural decisions are necessary/important to enable distributed computation. Does anyone have suggestions on this?

>OP, did you work at some IT or programming job or are you a wagecuck?

No, I am a university student. Right now, I have lots to do with the upcoming exams, so I didn't really have any time to work on this project recently. However, I think in a month, I will have much more time to spare for this.