I was playing with compiling C++ and Rust to WebAssembly and running it in a browser. WebAssembly is a bytecode that gets JIT-compiled by the browser and runs at near-native speed.

I was also playing with offscreen canvases inside of web workers. Web workers run in parallel and are relatively isolated and safe to terminate. Canvases give you hardware-accelerated 2D, 3D, or raw bitmap access.

This made me think I could port my userland libraries to run in WebAssembly. Then:
- In my bare metal kernel, interpret (maybe one day JIT) the WebAssembly, and have the system calls invoke real syscalls.
- In my web kernel, let the browser JIT the WebAssembly, and have a shim inside the web worker that turns the system calls into Web Worker messages.
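The web-kernel half of this could be sketched as a syscall table that the program calls into, where each entry just wraps the call into a message. A minimal sketch, assuming invented syscall numbers and message shapes (nothing here is a real API beyond `postMessage` itself):

```javascript
// Hypothetical syscall numbers for this sketch.
const SYS_WRITE = 1;
const SYS_EXIT = 2;

// Web-kernel shim: each syscall becomes a message the page-side "kernel"
// handles. On bare metal the same table would point at real kernel entry
// points instead.
function makeSyscallTable(post) {
  return {
    [SYS_WRITE]: (fd, text) => post({ syscall: "write", fd, text }),
    [SYS_EXIT]:  (code)     => post({ syscall: "exit", code }),
  };
}

// Inside a web worker, `post` would be self.postMessage; here messages
// are collected in an array so the shim can run outside a browser too.
const sent = [];
const table = makeSyscallTable((msg) => sent.push(msg));
table[SYS_WRITE](1, "hello");
table[SYS_EXIT](0);
console.log(sent.length); // 2
```

The interesting part is that the program never knows which backend it got; only the table construction differs between the two kernels.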

Then programs that run on my OS could be compiled to WebAssembly. My window manager could use hardware accelerated canvases.

You could literally run your OS inside a modern web browser on any device. If you had a package manager/app store, you could have a "run now" button so that anyone could try out programs for your OS in their browser.

Yes, this could be done. But is it worth it? I mean, what would be the purpose of a kernel running in a browser? The browser is a userland app which needs another OS anyway, and the applications running under it would be far too limited (much more limited than an app running in a real VM). I think this would only make sense if you created a minimal kernel that implements WebAssembly and ran your apps on that, like SanOS or the Dalvik engine in the Linux kernel. Considering how much must be done to get WebAssembly and canvases working properly on bare metal, I really doubt it's worth the time.

On the other hand, as a research project the idea is interesting, but I'd like to remind you this is not the first of its kind. Both Java applets and Flash failed (earlier iterations of the "running bytecode in the browser" idea). Java applets have been around for about 20 years now, and most modern browsers can run them JIT-compiled and 3D-accelerated, yet somehow almost nobody uses them. Flash was proprietary and full of bugs and bullshit, yet it used to be widespread at some point and disappeared without a trace (TBH I don't feel any emptiness left behind). How is WebAssembly different from those?

Edit: according to Wikipedia, Java applets are deprecated too, and support was removed in 2018. They lived 23 years without being noticed by the mainstream.

WebAssembly has been standardized by the W3C, and all the major browsers (Firefox, Chrome, Edge, and I believe their mobile equivalents) support WebAssembly (and canvases, WebGL, etc.) without needing browser plugins, so hopefully it stays around a while.

I don't see the majority of websites using WebAssembly anytime soon. Perhaps some niche use cases, such as a real estate site offering a virtual walkthrough, a browser game (Unity now supports deploying to WebAssembly), or a few HTML widgets such as a 3D pie chart.


So we agree that writing a kernel in WebAssembly is nothing more than a (very) interesting experiment, probably without any practical benefit. Don't get me wrong, experiments are important; we just should treat them as such.

In this regard, you made me curious what could be done in WebAssembly OSdev-wise. You can throw rotten eggs at me for saying this, but maybe a SUBLEQ machine could be ported to WebAssembly, running Geri's DawnOS.

WebAssembly opens the door for non-JavaScript developers (C#, C++, etc.) to write code that runs in the web browser.

Also, WebAssembly provides improved performance for low-level operations. So I would expect to see a lot more, and a lot better, x86 virtual machines and game console emulators showing up on the web in the near future.

Which means, among other things, that I'll finally be able to play NES games on my iPhone.

_________________
Project: OZone
Source: GitHub
Current Task: DOSBox Compatibility
"The more they overthink the plumbing, the easier it is to stop up the drain." - Montgomery Scott

SpyderTL wrote:

WebAssembly opens the door for non-JavaScript developers (C#, C++, etc.) to write code that runs in the web browser.

I understand this, but that door has already been open for more than 20 years. I'm only skeptical because Java applets had the same premise. You could write applets in Java, Ruby, Pascal, or Scala, for example, and yet nobody cared, and JavaScript remained the main scripting language of the web for some reason.

SpyderTL wrote:

Also, WebAssembly provides improved performance for low-level operations. So I would expect to see a lot more, and a lot better, x86 virtual machines and game console emulators showing up on the web in the near future.

The same was true of Java applets. With JIT compilation, they ran at almost native speed.

SpyderTL wrote:

Which means, among other things, that I'll finally be able to play NES games on my iPhone.

OK, applets were never ported to mobile OSes; maybe WebAssembly will be. TBH I don't know why not; Android already had a Java VM pre-installed, so it could technically have been done. I think it was more of a marketing decision.

I understand what they say about the benefits of WebAssembly; it's just that they said exactly the same about Java applets. Put the marketing bullshit aside:
- both are open-source standards,
- both can be compiled from different languages,
- both are distributed as bytecode,
- both run in a VM (probably JIT-compiled),
- both have access to hardware (like 3D acceleration),
- both are embeddable in a web page.
So if the first one failed despite all of these advantages, then why won't the second fail too? You know what I mean? What does WebAssembly have that Java applets don't, or, what did Java applets lack to be successful, and how does WebAssembly address that gap? I'm afraid this leads us off-topic; maybe it would be better to open a new topic?

I still think that it's a very interesting idea to experiment with an OS in WebAssembly, regardless of whether WebAssembly will be successful or not. I've suggested SUBLEQ because it has minimal dependencies, all provided by browsers (storage, events, and the canvas technology, to be more precise).

The only real "technical" difference that I can tell between Java applets and WebAssembly is that WebAssembly defines code at the function level and has no concept of classes, while Java defines code at the class level. Other than that, there's not much difference. But maybe that is enough of a difference if you are a C developer who wants to write code that runs on the client in a browser and doesn't want to create classes.

Also, browser developers seem to prefer open, standardized technologies over proprietary ones. So maybe WebAssembly will succeed where Java applets did not. Just because something works well doesn't necessarily guarantee that it will survive; I'm still a bit sore that HD DVD didn't make it. It was a much better platform than Blu-ray, IMO.


As for the first, my friend says "browsers are operating systems, but instead of providing simple interfaces, they make everything complicated." I don't know if I entirely agree, but I have come to hate CSS.

Actually, WebAssembly doesn't have direct access to hardware. The best you could do with WebAssembly at this point is to send requests from WASM to JavaScript and have JavaScript pass them along to WebGL.


I have given myself a break for the moment, and only visit here occasionally for some insight, until I educate myself in some select areas. I stay off the radio usually, but decided to chime in here.

I was considering similar ideas for the hypothetical future. You see, I hope to someday experiment with a mostly managed OS. An OS needs interoperability with existing technologies and programming competences, which, for a non-POSIX platform, makes an HTML5/JavaScript solution a natural choice, at least for the front end. However, since performance can be an issue and I envision managed system code as well (at least at the protocol/library/fs level), JavaScript is not very suitable. WebAssembly seems to be a statically structured VM/language, designed to be abstracted from the platform on top of which it runs, which could make it appropriate for system-wide adoption. (I still haven't checked on its threading/memory model, synchronization, etc. I think some shared-memory feature was abandoned for fear of Spectre/Meltdown-related vulnerabilities, but even then it shouldn't be a problem to add as a host extension...)

Now, depending on what you want to accomplish, you could adopt JavaScript/WebAssembly as a standalone applications infrastructure, or you could use a client-server model, akin to a Node.js<->Ajax web app. (Standalone and client-server are, however, not exclusive.) Do you want distributed applications, or locally run downloadable applets? Do you want to separate the platform-specific parts into the backend, or do you run OS-specific code directly in the front end?

In any case, for plain HTML5/JavaScript applications at least, you could get immediate interoperability with other OSes, and offer a platform advantage (assuming a long and productive life) through optional extensions and JIT performance tuning. The browser of your OS will require little development effort on top of your existing application infrastructure, although you will break your back implementing the latter anyhow. But what I consider important is that you will have a roadmap to convert to a distributed, online-delivery and service-oriented application model. Sadly, the days of classical desktop OSes are numbered. Even if it takes a couple of decades, clouds, desktop virtualization, on-demand content delivery, and web services will gradually replace and subsume the current self-hosting, mostly offline systems.

If you keep the computational and presentational application aspects separated (the client-server model), you can prime your OS to become globally distributed and web-centric. It may survive the initial attack of the cloud or even migrate into it, continuing to run on client hardware as the front end. Additionally, web developers and enthusiast scripting contributors can offer you third-party support despite your relative anonymity.

Anyway. Take my opinion with a grain of salt. Sometimes I have an idea of what I am talking about, and sometimes I seem to be hallucinating.

@simeonz: Apart from the web stuff, this all reminds me of Plan 9, which was distributed in this sense in the '90s. For a while, it seemed unsuitable for the modern Internet, but I'm told it's got better again as latency has dropped.

simeonz wrote:

Anyway. Take my opinion with a grain of salt. Sometimes I have an idea of what I am talking about, and sometimes I seem to be hallucinating.

When I started this thread, I wasn't thinking about porting a browser environment (including HTML and WebGL) into your kernel. Instead, I was thinking about the possibility of using WebAssembly as your OS's bytecode format for distributing applications.

At the very least, this would require implementing a WebAssembly VM/interpreter in your kernel, so you can run WebAssembly on bare metal.

A WebAssembly module is associated with a table of symbols it imports (your system calls) and exports (e.g. 'main', but you could have other entry points, such as handlers). I'm fascinated by the idea of implementing the same set of system calls both in a bare-metal kernel and inside a web page. Then the same program, distributed as WebAssembly bytecode, could run both on your kernel and in a browser, as long as they shared the same ABI.
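To make the import/export table concrete, here is a tiny WebAssembly module hand-assembled as raw bytes: it imports one function, "sys.log" (a made-up "syscall" name for this sketch), and exports "run", which just forwards its argument to the import. The browser page, or a bare-metal kernel's interpreter, plays the role of the host that fills in the import:

```javascript
// Hand-encoded wasm binary: imports sys.log (i32 -> i32), exports run.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // \0asm, version 1
  0x01, 0x06, 0x01, 0x60, 0x01, 0x7f, 0x01, 0x7f,       // type 0: (i32) -> i32
  0x02, 0x0b, 0x01, 0x03, 0x73, 0x79, 0x73,             // import module "sys"
  0x03, 0x6c, 0x6f, 0x67, 0x00, 0x00,                   //   field "log", func, type 0
  0x03, 0x02, 0x01, 0x00,                               // one local func, type 0
  0x07, 0x07, 0x01, 0x03, 0x72, 0x75, 0x6e, 0x00, 0x01, // export "run" = func 1
  0x0a, 0x08, 0x01, 0x06, 0x00,                         // code section, no locals
  0x20, 0x00, 0x10, 0x00, 0x0b,                         //   local.get 0; call 0; end
]);

// The host supplies the "syscall" implementation; in a bare-metal kernel
// the same import name would be bound to a real kernel entry point.
const imports = { sys: { log: (x) => x + 1 } };
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes), imports);
console.log(instance.exports.run(41)); // 42
```

Swapping the `imports` object is all it takes to retarget the same bytecode at a different backend, which is the whole "same ABI, two kernels" idea.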

I mentioned canvases not because I'd want to reimplement the Canvas spec in my bare-metal kernel, but because modern web browsers provide a bitmaprenderer canvas context, so you could display the pixels your low-level graphics API is pumping out. (The syscalls for a higher-level graphics API could take advantage of hardware acceleration in the browser, such as WebGL.)
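A rough sketch of the low-level side: the "graphics syscall" just fills a raw RGBA framebuffer (the dimensions and gradient pattern here are arbitrary), and the browser-only display step is shown in comments because it needs a DOM:

```javascript
// Fill a raw RGBA framebuffer the way a low-level graphics syscall might.
const width = 4, height = 2;
const fb = new Uint8ClampedArray(width * height * 4);
for (let y = 0; y < height; y++) {
  for (let x = 0; x < width; x++) {
    const i = (y * width + x) * 4;
    fb[i]     = (x * 255 / (width - 1)) | 0; // red ramps left to right
    fb[i + 1] = 0;                           // green
    fb[i + 2] = 0;                           // blue
    fb[i + 3] = 255;                         // fully opaque
  }
}

// In a browser, you would hand this buffer to a bitmaprenderer context:
//   const image = new ImageData(fb, width, height);
//   const bitmap = await createImageBitmap(image);
//   canvas.getContext("bitmaprenderer").transferFromImageBitmap(bitmap);
console.log(fb[12]); // red channel of the last pixel in row 0: 255
```

On bare metal, the same buffer would instead be blitted to the real framebuffer, so the program's output path is identical either way.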

I mentioned Web Workers not because I'd want to reimplement Web Workers in my bare-metal kernel, but because Web Workers would provide a way to implement multi-threading and process isolation inside the browser.
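The page-side "kernel" half of that shim could look like the following sketch: a dispatcher that handles syscall messages arriving from a per-process worker. All the syscall names and the per-process `state` shape are invented for illustration:

```javascript
// Kernel side of the shim: one handler per invented "syscall".
const handlers = {
  write:  ({ text }, state) => { state.output += text; return text.length; },
  getpid: (_, state) => state.pid,
};

// In the browser this would run in worker.onmessage, and the return value
// would be posted back to the worker. Terminating the worker then kills
// the "process" without touching the rest of the system.
function handleSyscallMessage(msg, state) {
  const fn = handlers[msg.syscall];
  if (!fn) throw new Error("ENOSYS: " + msg.syscall);
  return fn(msg, state);
}

const proc = { pid: 7, output: "" };
console.log(handleSyscallMessage({ syscall: "write", text: "hi" }, proc)); // 2
console.log(handleSyscallMessage({ syscall: "getpid" }, proc)); // 7
```

One caveat worth noting: `postMessage` is asynchronous, so making syscalls look blocking from the worker's side would take something like `SharedArrayBuffer` plus `Atomics.wait`, which this sketch doesn't cover.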

You could implement your filesystem IO syscalls around IndexedDB inside of a browser. Web Audio gives you an audio buffer you can write into.
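A sketch of what those filesystem syscalls might look like, assuming invented names throughout: in the browser the backing store would be IndexedDB (which is asynchronous), but a `Map` stands in here so the shape of the shim is visible without a browser:

```javascript
// Filesystem syscall shim over a generic key-value store. In a browser,
// `store` would wrap IndexedDB; on bare metal, a real on-disk filesystem.
function makeFs(store) {
  return {
    write(path, data) { store.set(path, data); return data.length; },
    read(path) {
      if (!store.has(path)) throw new Error("ENOENT: " + path);
      return store.get(path);
    },
    unlink(path) { return store.delete(path); },
  };
}

const fs = makeFs(new Map());
fs.write("/etc/motd", "hello");
console.log(fs.read("/etc/motd")); // "hello"
fs.unlink("/etc/motd");
```

Since IndexedDB is asynchronous, the real browser version would either expose async syscalls or block the worker with `Atomics.wait` while the main thread does the I/O; the interface above sidesteps that for clarity.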

But aren't the services provided by (and required of) a bare-metal OS and a web browser so distinct as to render the intersection non-existent?

I admit I don't know much about WebAssembly, or web programming in general.

But let's take I/O as an example. I can't imagine a web browser allowing WebAssembly free access to the file system. So you'd have to "mock up" the filesystem, as your bare-metal OS sees and handles it, inside the web browser. I have a hard time seeing the usefulness of, e.g., a database application running inside a browser on top of a filesystem mock-up running on a database backend running on the filesystem the web browser is running on... you see what I mean? On the other hand, many of the things necessary at the OS level -- filesystem defrag, concurrency, multi-user handling, etc. -- just don't apply to what is happening in the browser.

Similarly with graphics. Sure, you could whip up some abstraction layer that makes 3D operations on an nVidia card driver look the same as writing to a browser canvas... but the added effort would surely be non-negligible, I am not at all sure about the compromises you'd have to make, and what is the return on investment other than the novelty value?

Not saying that there isn't any. I just don't see it, which is why I am asking.

I am not at all sure about the compromises you'd have to make, and what is the return on investment other than the novelty value?

Not saying that there isn't any. I just don't see it, which is why I am asking.

The dream for me is to be in a state where I have an app store, and I can send you a link to a program I wrote in C++; you click "Run", and the program near-immediately runs inside the browser, and the binary is identical to what you could download and run off a disk on my bare-metal kernel.

@simeonz: Apart from the web stuff, this all reminds me of Plan 9, which was distributed in this sense in the '90s. For a while, it seemed unsuitable for the modern Internet, but I'm told it's got better again as latency has dropped.

I'm sure that Plan 9 and Inferno are at least worth checking out. At least, they should offer a good starting point. On the other hand, to my mind, a modern distributed OS needs to offer location transparency, manual and automatic application and data caching, application versioning, etc. In the end, a distributed system could have too many tradeoffs to capture them all in a single OS.

Regarding the latency: I think it depends. For local networks, it is acceptable for some use cases. For global access, the latency is large enough to be a problem and may never be low enough without geodistribution. This is something that a cloud-owning OS vendor can offer out of the box, but an amateur OS would not be able to. It means that the persistence layer may be served from the other side of the globe instead of from local proxies, which means too many hops, and even the speed-of-light latency accrues on top of that. A web application improves upon the X Window approach in a sense. If we split the typical application into persistence, logic, and presentation layers, an X server divides the presentation layer between the client and the server, whereas a web-app approach divides the logic layer to run partly on the server and partly on the client. The latter allows the client to run independently for longer periods of time, interpolating, caching, etc., which helps to overcome the latency.
