Samsung teams up with Mozilla to build browser engine for multicore machines

Servo browser engine uses Mozilla's new Rust language.

Mozilla today announced a collaboration with Samsung to produce a new browser engine designed to take full advantage of processors with multiple cores.

For the last couple of years, Mozilla Research has been developing a new programming language, Rust, that's designed to provide the same performance and power as C++, but without the same risk of bugs and security flaws, and with built-in mechanisms for exploiting multicore processors.

Using Rust, the company has been working on a prototype browser engine, named Servo.

Rust's safety features aim to eliminate many kinds of memory corruption bugs that are currently abundant in C++ programs, such as trying to write more data to buffers than the buffers can contain, or trying to use blocks of memory even after they've been deallocated. Unlike many other safe languages (such as C# and JavaScript), Rust still compiles to native code, so it should provide performance that's comparable to C++.
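As an illustration of those two bug classes (in modern Rust syntax, which differs considerably from the 0.6-era language described here): out-of-bounds accesses are checked rather than silently corrupting memory, and the compiler rejects any use of a value after its owner has gone away.

```rust
// A hedged sketch of the two bug classes mentioned above.
fn first_byte(buf: &[u8]) -> Option<u8> {
    // Indexing past the end of a slice panics instead of scribbling on
    // the heap; .get() makes the bounds check explicit and recoverable.
    buf.get(0).copied()
}

fn main() {
    let buf = vec![1u8, 2, 3];
    assert_eq!(first_byte(&buf), Some(1));

    let owner = buf; // ownership moves; `buf` can no longer be used
    // println!("{:?}", buf); // compile error: value used after move
    drop(owner); // memory is freed here; no dangling reference can exist
}
```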

The language has built-in concurrency features to make it easier to produce both task-parallel programs (those that do different things on different data on different cores) and data-parallel programs (those that do the same thing to different data on different cores). These concurrency features mesh neatly with the safety features by making data sharing between threads more explicit.
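A hedged sketch of the data-parallel case, again in modern Rust (the concurrency primitives have changed considerably since 0.6): each spawned thread takes ownership of its own chunk of the data, so the compiler guarantees no two threads mutate the same memory, and partial results come back over a channel.

```rust
use std::sync::mpsc;
use std::thread;

// Square every element across `workers` threads and sum the results.
fn parallel_square_sum(data: Vec<i64>, workers: usize) -> i64 {
    if data.is_empty() {
        return 0;
    }
    let (tx, rx) = mpsc::channel();
    let chunk = (data.len() + workers - 1) / workers; // ceiling division
    for part in data.chunks(chunk) {
        let part = part.to_vec(); // each thread owns its chunk outright
        let tx = tx.clone();
        thread::spawn(move || {
            let partial: i64 = part.iter().map(|x| x * x).sum();
            tx.send(partial).unwrap();
        });
    }
    drop(tx); // close the channel so the receiver's iterator terminates
    rx.iter().sum()
}
```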

Rust 0.6, released today, contains contributions from Samsung that port the language to ARM processors and the Android operating system.

Current browser engines are substantially single-threaded. Though some tasks such as decompressing images are starting to be done on separate processor cores, the bulk of the work of interpreting HTML and laying out pages is single-threaded. This limits the scalability of browser engines on the current proliferation of multicore processors. The ability to scale to four or more cores is increasingly attractive, with even smartphone processors containing lots of cores.

Browsers do allow certain kinds of parallel programming today. WebGL shader programs execute in parallel on GPUs, and Web Workers in JavaScript also allow parallel computation. With Servo, Mozilla is attempting to produce a browser engine that uses parallel processing for plain HTML and CSS too, ensuring that all developers—not just those producing WebGL/JavaScript programs—can take advantage of parallel processing.

As Mozilla learns more from its experience with Rust and Servo, this could then feed back into the designs of future versions of HTML and CSS to make them more amenable to this kind of engine.

Samsung, of course, has many fingers in many pies. It produces smartphones running Android and Windows Phone, it has developed its own smartphone operating system (Bada), and it is now folding that work into the HTML5-based Tizen, codeveloped with Intel. The company is keeping its options open.

On why it's contributing to Rust and Servo, a Samsung spokesman said, "As a leading mobile/TV manufacturer, Samsung is investigating various new technologies to innovate legacy products. This collaboration will bring an opportunity to open a new era of future web experience."

Servo is, at the moment, still a research project. We're not going to be running Servo-based browsers any time soon, and for the foreseeable future, Mozilla will be sticking with its C++ Gecko engine. But a few years from now, Servo, or something like it, could find itself at the core of the browsers we all use and depend on.

69 Reader Comments

Does rendering HTML and CSS really require so much processing power that there are gains to be had from creating/using a multi-threaded rendering engine?

On a touch UI better threading means a more responsive interface because the touch processing code is always going to be running out of sync with the actual rendering code.

I don't know if you've tried the chrome for Android development builds, but synchronization between threads, the touch interface, and what is being rendered is actually a serious problem in Chrome for instance. Finding better ways to solve this, or ideally building a new system around it could have a huge impact on user experience, even if total rendering time isn't reduced very much.

Does rendering HTML and CSS really require so much processing power that there are gains to be had from creating/using a multi-threaded rendering engine?

I mean, I'm all for improvements, but this seems like writing a multi-threaded "Hello World" app, to me.

Considering that CSS can specify various complex animations, 3D transforms, etc. nowadays, interpreting and rendering it is hardly equivalent to hello world. Things have changed a lot since the days of HTML 3.2.

HTML doesn't require much processing, but if you can break up the code into small chunks and spread the workload between multiple cores, you can substantially reduce the time it takes to render the HTML. This can also open doors for future web technologies that may need multiple cores, leading to an even more interactive Internet. HTML5 will perform much better when browsers can utilize multiple cores for every piece of web code coming down the pipe.

Every major company has its own "C++, but not C++" language with bounds-checked arrays and concurrency support. I'm not sure we needed one more. Besides, it's going to take a significant development and engineering effort to port something as complex as the Gecko engine to a different language.

It's not fair to say that C# and Javascript aren't compiled to native code, because they both are. C# compiles to MSIL, and the Microsoft implementation saves a cached native version of every .NET application somewhere in C:\Windows. If you're not satisfied with that, you can also use the ngen tool to compile a .NET program ahead of time to a native image. As for Javascript, all major browsers, without exception, compile Javascript to native code, although the dynamic nature of the language often makes it hard to apply good optimizations (a problem that asm.js, also pushed forward by Mozilla and recently showcased on Ars as well, tries to solve).

That said, if Mozilla thinks it can find a way to take advantage of multiple cores for parsing and rendering HTML web pages, then it's really cool. I'm just not sure they're taking the good road to it.

Rendering a hierarchical document structure is definitely something that can be parallelized. The lexer and parser build a tree, and threads may be spawned for each child node as the renderer traverses down the tree from the root node.

There are still some serialization requirements such as reflowing the layout around any float elements after the normal flow is completely rendered. But the bulk of the rendering process is suitable for threading.
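The tree traversal described above can be sketched roughly as follows, in modern Rust with scoped threads (Rust 1.63+; `Node` and `subtree_width` are hypothetical names, not part of Servo): each child subtree is handed to its own thread, and the parent combines the results serially afterward.

```rust
use std::thread;

// A hypothetical, much-simplified layout tree.
struct Node {
    width: u32,
    children: Vec<Node>,
}

// Lay out each subtree on its own (scoped) thread, then combine the
// results serially at the parent. A real engine would follow this with
// a serial reflow pass for floats, as noted above.
fn subtree_width(node: &Node) -> u32 {
    let child_total: u32 = thread::scope(|s| {
        let handles: Vec<_> = node
            .children
            .iter()
            .map(|child| s.spawn(move || subtree_width(child)))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    });
    node.width + child_total
}
```

In practice a real engine would spawn threads only above some subtree-size threshold; one thread per node would drown the gains in scheduling overhead.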

Every major company has its own "C++, but not C++" language with bounds-checked arrays and concurrency support. I'm not sure we needed one more. Besides, it's going to take a significant development and engineering effort to port something as complex as the Gecko engine to a different language.

It's not fair to say that C# and Javascript aren't compiled to native code, because they both are. C# compiles to MSIL, and the Microsoft implementation saves a cached native version of every .NET application somewhere in C:\Windows. If you're not satisfied with that, you can also use the ngen tool to compile a .NET program ahead of time to a native image. As for Javascript, all major browsers, without exception, compile Javascript to native code, although the dynamic nature of the language often makes it hard to apply good optimizations (a problem that asm.js, also pushed forward by Mozilla and recently showcased on Ars as well, tries to solve).

That said, if Mozilla thinks it can find a way to take advantage of multiple cores for parsing and rendering HTML web pages, then it's really cool. I'm just not sure they're taking the good road to it.

Question: Who are these companies you speak of? What are these languages? Google has made Go and Dart. Now please, tell me about these other major companies? I have no clue what you're talking about.

Also, even languages that get JITted like Javascript and C# still perform quite a bit slower than C++ code, and in some cases worse than half as fast. Rust is built on LLVM if I recall right, and so it gets fully optimized in the same way as C++ code and then emitted into native machine code. With JITted languages, they run a few shallow optimizations on the code and then let it loose, but Javascript and C# are both also what is known as "managed" code. This means that everything they do is double checked by another process to keep it in check, and then there are garbage collectors that come around every so often and clean things up. All of these things contribute to a notable loss in performance. With Rust, everything happens at compile time, and there is no ghost in the system watching after Rust processes.

Many, many years ago I took a C++ language course taught by one of the developers of Apple's ill-fated Copland OS. He spent a lot of the class teaching techniques to avoid the many ways C++ can break. And when C++ breaks, it is often because of accessing mistakenly deallocated memory, causing crashes in code that has nothing to do with the actual bug, making debugging near impossible. I got the impression that some of the Copland problems were due to using C++, and the one thing I really got out of that class was to avoid ever using that language. It scares me to think how many places that language is used.

Of course you can mess up just as easily in C and probably even Rust. But it seemed that C++ was especially prone to these types of problems.

Many, many years ago I took a C++ language course taught by one of the developers of Apple's ill-fated Copland OS. He spent a lot of the class teaching techniques to avoid the many ways C++ can break. And when C++ breaks, it is often because of accessing mistakenly deallocated memory, causing crashes in code that has nothing to do with the actual bug, making debugging near impossible. I got the impression that some of the Copland problems were due to using C++, and the one thing I really got out of that class was to avoid ever using that language. It scares me to think how many places that language is used.

Of course you can mess up just as easily in C and probably even Rust. But it seemed that C++ was especially prone to these types of problems.

Rust is (theoretically) designed to prevent these problems. We'll see how it turns out.

I'm more interested in the business implications of this. It seems like another hint that Samsung wants to own the various pieces of software that would allow it to replace Google on its smartphones at some point in the future (or at least have the option).

Also, even languages that get JITted like Javascript and C# still perform half as fast as C++ code, if not slower. Rust is built on LLVM if I recall right, and so it gets fully optimized in the same way as C++ code and then emitted into native machine code. With JITted languages, they run a few shallow optimizations on the code and then let it loose, but Javascript and C# are both what is known as "managed" code. This means that everything they do is double checked by another process to keep it in check, and then there are garbage collectors that come around every so often and clean things up. All of these things contribute to a notable loss in performance. With Rust, everything happens at compile time, and there is no ghost in the system watching after Rust processes.

You really don't know what you're talking about. Or you're oversimplifying to the point of being incorrect.

Yes, most current JITed languages do fewer optimisations at runtime than an optimising C++ compiler. But then again, some (Java's HotSpot, for example) do more and can actually do better (in the long run, where throughput is your goal) because they are, in effect, always using profiler-guided optimisation, not something many C++ programs take the time to do.

They also always get retargeted to the native machine they are running on, rather than the lowest common denominator.

Also, using modern "pointer bump" allocation can make GC-based mechanisms faster than the default allocators (possibly at the cost of more expensive stop-the-world pauses, though things like Azul's JVMs really are quite amazing in that regard).

They do always have a significant start-up cost associated with them. That's really the first of two big reasons to use ngen, the other being increasing the number of memory pages that can be shared across multiple processes running the same code.

Saying they are "at least two times slower" is just stupid, they are empirically not (in some cases they can actually be faster, though that is rare I admit).

Also, managed code does not mean "another process" checks things. It could be implemented that way, but it would be grotesquely slow. It means all interactions which might cause memory safety to be violated are[1] checked by the runtime unless explicit techniques that escape this safety net are used (and are authorised). Java has much the same concept with the JVM runtime (MS I believe coined the phrase, but most people I know use it to refer to any such environment these days).

1. technically <should> but one hopes it's very close to all or there's problems

Edit: editing the post I quoted after the event is very bad form, don't do it. There are plenty of examples in Ars's past of me being wrong. Accept it and deal with it.

Question: Who are these companies you speak of? What are these languages? Google has made Go and Dart. Now please, tell me about these other major companies? I have no clue what you're talking about.

Google has Go and Dart, Microsoft has C# (and the whole CLR) as mentioned, Apple has Objective-C+libdispatch, Oracle has (acquired) Java. All of them were made with the claim that they're faster (or as fast) as C++, have checked arrays, and extensive concurrency support. Even C++11 has good concurrency support now, and most collections are checked.

coder543 wrote:

Also, even languages that get JITted like Javascript and C# still perform quite a bit slower than C++ code, and in some cases worse than half as fast. Rust is built on LLVM if I recall right, and so it gets fully optimized in the same way as C++ code and then emitted into native machine code. With JITted languages, they run a few shallow optimizations on the code and then let it loose, but Javascript and C# are both also what is known as "managed" code. This means that everything they do is double checked by another process to keep it in check, and then there are garbage collectors that come around every so often and clean things up. All of these things contribute to a notable loss in performance. With Rust, everything happens at compile time, and there is no ghost in the system watching after Rust processes.

As you acknowledge, this is all due to the fact that they're generally compiled just-in-time, and all these hurdles go away if you do it ahead-of-time. As my post mentions, you have ngen for CLR assemblies, which has all the time it needs to do its job as well as can be. It would also arguably be pretty easy to make an ahead-of-time Javascript compiler. The only language in that previous list that can't be compiled ahead-of-time is Java, because it heavily depends on the ability to load and unload classes at runtime.

Speaking of LLVM, the Webkit guys are experimenting with using it as a Javascript backend (I don't think it's fast enough for the scope of web pages, but I'll let them decide for themselves, and it could very well lead to a Javascript ahead-of-time compiler anyways, which is an idea I like), and Mono has been using LLVM for ahead-of-time compiling of .NET programs for quite a while now.

Rust's safety features aim to eliminate many kinds of memory corruption bugs that are currently abundant in C++ programs,

Sounds more like C-style bugs than C++-style bugs. Isn't Mozilla's energy better spent on something other than developing new programming languages?

They're C++-style bugs. C++ gives better tools for preventing these issues than C, certainly, but it is not immune to them.

If Rust uses any kind of bindings to C- or C++-based libraries, it certainly won't be. An awful lot of security vulnerabilities are exploiting bugs in video or image decompression libraries that browsers use rather than security vulnerabilities in the browsers themselves.

Does rendering HTML and CSS really require so much processing power that there are gains to be had from creating/using a multi-threaded rendering engine?

I mean, I'm all for improvements, but this seems like writing a multi-threaded "Hello World" app, to me.

If you think in terms of "classical" HTML, I agree with you, but for heavy HTML5, Ajax, or similar workloads there would be gains in such an approach. Mozilla of course dreams of an "all-HTML phone," and this could be a huge step in that direction (if it works well, of course).

I just ported some multi-threaded C# code to Rust. It uses Rust tasks via spawn. I allocated 10,000 tasks in Rust that looked values up in an XML database, and compared the read/write times between the Rust and C# code.

On Linux, Rust read 100 XML nodes per spawned task in 50 seconds.
On Linux using Mono C#, the same task using threads took 180 seconds.

On Windows 7, Rust read 100 XML nodes per spawned task in 250 seconds.
On Windows using Microsoft C#, the same task using threads took about 67 seconds.

It appears that the MinGW requirement for Rust on Windows makes Rust perform worse than the C# threaded version.

On Linux, it appears the Mono C# implementation is the roadblock, and the native Rust implementation is the best of them all.

Rust's safety features aim to eliminate many kinds of memory corruption bugs that are currently abundant in C++ programs,

Sounds more like C-style bugs than C++-style bugs. Isn't Mozilla's energy better spent on something other than developing new programming languages?

This is an effort by Mozilla Research. Languages and runtimes/engines are pretty much what they do.

Rust itself is a neat language in that it's memory safe and not garbage collected*. The compiler statically verifies the lifetime of all references at compile time. As a fan of functional languages (Clojure, F#) I like that it's immutable by default and that things like for loops are heavily syntax sugared higher ordered functions with closures as the body.

*You can opt for GC (currently reference counting) with @pointers but idiomatic rust code that I've seen seems to generally avoid it.

As someone coming from dynamic/functional languages, I feel like I fight with the type checker a lot when I'm writing toy programs in Rust. Part of it is borrow checker bugs and another is me not having patterns for how to solve flow patterns. I'm interested in seeing what 'normal' Rust code looks like once the language stabilizes since I've seen both C++ looking Rust and Haskell looking Rust.
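A minimal sketch of the points above, in modern Rust syntax (the 0.6-era syntax differed, and the `@` garbage-collected pointers mentioned were later removed from the language entirely): bindings are immutable unless marked `mut`, and `for` loops are sugar over the iterator protocol.

```rust
// Bindings are immutable by default; mutation is an explicit opt-in.
fn triangular(n: u32) -> u32 {
    let step = 1; // immutable: writing `step = 2;` later is a compile error
    let mut total = 0; // `mut` is required to allow reassignment
    // `for` desugars to repeated Iterator::next() calls on the range
    for i in (1..=n).step_by(step) {
        total += i;
    }
    total
}
```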

Also, even languages that get JITted like Javascript and C# still perform half as fast as C++ code, if not slower. Rust is built on LLVM if I recall right, and so it gets fully optimized in the same way as C++ code and then emitted into native machine code. With JITted languages, they run a few shallow optimizations on the code and then let it loose, but Javascript and C# are both what is known as "managed" code. This means that everything they do is double checked by another process to keep it in check, and then there are garbage collectors that come around every so often and clean things up. All of these things contribute to a notable loss in performance. With Rust, everything happens at compile time, and there is no ghost in the system watching after Rust processes.

You really don't know what you're talking about. Or you're oversimplifying to the point of being incorrect.

Yes, most current JITed languages do fewer optimisations at runtime than an optimising C++ compiler. But then again, some (Java's HotSpot, for example) do more and can actually do better (in the long run, where throughput is your goal) because they are, in effect, always using profiler-guided optimisation, not something many C++ programs take the time to do.

They also always get retargeted to the native machine they are running on, rather than the lowest common denominator.

Also, using modern "pointer bump" allocation can make GC-based mechanisms faster than the default allocators (possibly at the cost of more expensive stop-the-world pauses, though things like Azul's JVMs really are quite amazing in that regard).

They do always have a significant start-up cost associated with them. That's really the first of two big reasons to use ngen, the other being increasing the number of memory pages that can be shared across multiple processes running the same code.

Saying they are "at least two times slower" is just stupid, they are empirically not (in some cases they can actually be faster, though that is rare I admit).

Also, managed code does not mean "another process" checks things. It could be implemented that way, but it would be grotesquely slow. It means all interactions which might cause memory safety to be violated are[1] checked by the runtime unless explicit techniques that escape this safety net are used (and are authorised). Java has much the same concept with the JVM runtime (MS I believe coined the phrase, but most people I know use it to refer to any such environment these days).

1. technically <should> but one hopes it's very close to all or there's problems

Edit: editing the post I quoted after the event is very bad form, don't do it. There are plenty of examples in Ars's past of me being wrong. Accept it and deal with it.

I edited before you posted, but after you hit the quote button. It was not intentional. However, my point remains.

Question: Who are these companies you speak of? What are these languages? Google has made Go and Dart. Now please, tell me about these other major companies? I have no clue what you're talking about.

Google has Go and Dart, Microsoft has C# (and the whole CLR) as mentioned, Apple has Objective-C+libdispatch, Oracle has (acquired) Java. All of them were made with the claim that they're faster (or as fast) as C++, have checked arrays, and extensive concurrency support. Even C++11 has good concurrency support now, and most collections are checked.

coder543 wrote:

Also, even languages that get JITted like Javascript and C# still perform quite a bit slower than C++ code, and in some cases worse than half as fast. Rust is built on LLVM if I recall right, and so it gets fully optimized in the same way as C++ code and then emitted into native machine code. With JITted languages, they run a few shallow optimizations on the code and then let it loose, but Javascript and C# are both also what is known as "managed" code. This means that everything they do is double checked by another process to keep it in check, and then there are garbage collectors that come around every so often and clean things up. All of these things contribute to a notable loss in performance. With Rust, everything happens at compile time, and there is no ghost in the system watching after Rust processes.

As you acknowledge, this is all due to the fact that they're generally compiled just-in-time, and all these hurdles go away if you do it ahead-of-time. As my post mentions, you have ngen for CLR assemblies, which has all the time it needs to do its job as well as can be. It would also arguably be pretty easy to make an ahead-of-time Javascript compiler. The only language in that previous list that can't be compiled ahead-of-time is Java, because it heavily depends on the ability to load and unload classes at runtime.

Speaking of LLVM, the Webkit guys are experimenting with using it as a Javascript backend (I don't think it's fast enough for the scope of web pages, but I'll let them decide for themselves, and it could very well lead to a Javascript ahead-of-time compiler anyways, which is an idea I like), and Mono has been using LLVM for ahead-of-time compiling of .NET programs for quite a while now.

Google's languages are the only ones you mentioned that aren't a decade old or older. I'd love to see some benchmarks of Mono's AOT compilation, though... that would be an interesting discussion.

Rust's safety features aim to eliminate many kinds of memory corruption bugs that are currently abundant in C++ programs,

Sounds more like C-style bugs than C++-style bugs. Isn't Mozilla's energy better spent on something other than developing new programming languages?

They're C++-style bugs. C++ gives better tools for preventing these issues than C, certainly, but it is not immune to them.

To expand on this, Rust formalizes the implicit ownership and lifetime semantics of C++ to provide memory safety and prevent in-memory data races. It's possible to provide a similar level of robustness by taking the route of having everything boxed, using a garbage collector, and forcing data to be copied between threads, but they aren't willing to make a sacrifice in latency or performance.

It's built on existing technology (LLVM and libuv) with proven concepts from other languages, so it's not exactly a blue sky research project - it's a very directed effort to solve the problems they continue to encounter again and again in the millions of lines of code they maintain. I wouldn't expect Firefox to be written in it any time soon, but important components could be within a few years.

Google's languages are the only ones you mentioned that aren't a decade old or older. I'd love to see some benchmarks of Mono's AOT compilation, though... that would be an interesting discussion.

I don't think the age of a technology should be relevant here. For the Mono AOT backend, you can get more info here, and actually compiling stuff ahead-of-time is extremely easy if you want to give it a go.

Question: Who are these companies you speak of? What are these languages? Google has made Go and Dart. Now please, tell me about these other major companies? I have no clue what you're talking about.

Google has Go and Dart, Microsoft has C# (and the whole CLR) as mentioned, Apple has Objective-C+libdispatch, Oracle has (acquired) Java. All of them were made with the claim that they're faster (or as fast) as C++, have checked arrays, and extensive concurrency support. Even C++11 has good concurrency support now, and most collections are checked.

coder543 wrote:

Also, even languages that get JITted like Javascript and C# still perform quite a bit slower than C++ code, and in some cases worse than half as fast. Rust is built on LLVM if I recall right, and so it gets fully optimized in the same way as C++ code and then emitted into native machine code. With JITted languages, they run a few shallow optimizations on the code and then let it loose, but Javascript and C# are both also what is known as "managed" code. This means that everything they do is double checked by another process to keep it in check, and then there are garbage collectors that come around every so often and clean things up. All of these things contribute to a notable loss in performance. With Rust, everything happens at compile time, and there is no ghost in the system watching after Rust processes.

As you acknowledge, this is all due to the fact that they're generally compiled just-in-time, and all these hurdles go away if you do it ahead-of-time. As my post mentions, you have ngen for CLR assemblies, which has all the time it needs to do its job as well as can be. It would also arguably be pretty easy to make an ahead-of-time Javascript compiler. The only language in that previous list that can't be compiled ahead-of-time is Java, because it heavily depends on the ability to load and unload classes at runtime.

Speaking of LLVM, the Webkit guys are experimenting with using it as a Javascript backend (I don't think it's fast enough for the scope of web pages, but I'll let them decide for themselves, and it could very well lead to a Javascript ahead-of-time compiler anyways, which is an idea I like), and Mono has been using LLVM for ahead-of-time compiling of .NET programs for quite a while now.

Google's languages are the only ones you mentioned that aren't a decade old or older. I'd love to see some benchmarks of Mono's AOT compilation, though... that would be an interesting discussion.

Now you're just being intentionally a dick. The name Objective-C may be old, but Objective-C 2.0, which is the part that has the material of interest here (e.g. ARC), is, what, 5 years old or less.

As you acknowledge, this is all due to the fact that they're generally compiled just-in-time, and all these hurdles go away if you do it ahead-of-time. As my post mentions, you have ngen for CLR assemblies, which has all the time it needs to do its job as well as can be. It would also arguably be pretty easy to make an ahead-of-time Javascript compiler. The only language in that previous list that can't be compiled ahead-of-time is Java, because it heavily depends on the ability to load and unload classes at runtime.

Speaking of LLVM, the Webkit guys are experimenting with using it as a Javascript backend (I don't think it's fast enough for the scope of web pages, but I'll let them decide for themselves, and it could very well lead to a Javascript ahead-of-time compiler anyways, which is an idea I like), and Mono has been using LLVM for ahead-of-time compiling of .NET programs for quite a while now.

No, it would not "arguably be pretty easy to make an ahead-of-time Javascript compiler". Have you actually looked at what it takes to compile Javascript? The language is a dynamic nightmare (as many scripting languages are). If someone calls eval() on a variable, your AOT compiler is sunk unless it links in a JIT compiler with the executable or something. But wait, there's more! You can attach methods and properties to an object in JS at runtime, so it is not trivial to determine ahead-of-time what foo.bar will resolve to (Does foo have a bar? If so, is it a string, an int, a float, what? Maybe it's a method!).

Even if it did make sense to try, it wouldn't be a good allocation of effort. Software engineers that work on JS engines have made them so stupendously fast that it's possible to have JS that runs only a factor of two slower than native: http://kripken.github.com/mloc_emscripten_talk/#/27. The only downside, of course, is that for that sort of speedup you need to compile C++ to a form of JS which is essentially assembler, and the sort of JS code that is that fast certainly is not idiomatic, not to mention that you've ended up back at C++.

Rust's safety features aim to eliminate many kinds of memory corruption bugs that are currently abundant in C++ programs,

Sounds more like C-style bugs than C++-style bugs. Isn't Mozilla's energy better spent on something other than developing new programming languages?

They're C++-style bugs. C++ gives better tools for preventing these issues than C, certainly, but it is not immune to them.

If Rust uses any kind of bindings to C- or C++-based libraries, it certainly won't be. An awful lot of security vulnerabilities are exploiting bugs in video or image decompression libraries that browsers use rather than security vulnerabilities in the browsers themselves.

While what you say is true, I believe the goal is to write as much as feasible in Rust... eventually.

In addition, we'd like the components of Servo to be reusable for general Rust apps. For example, networking, URL parsing, and layers are useful functionality for a variety of applications, and we can benefit from community contributions to these components when and if Rust becomes widely used.

Where possible, we reuse existing C and C++ libraries. This doesn't preclude the possibility of rewriting the functionality provided by these libraries in safe and parallel Rust code, but it helps us get off the ground.

Which 2D graphics, image decoding, and other libraries will be used in the meantime is discussed somewhat in the post. Where they will draw the line remains to be seen, but they can certainly reduce the attack surface.

"the same performance and power as C++, but without the same risk of bugs and security flaw" - which is it? The power of C++ is what allows bugs.

What is insane is yet another programming language. As if C, C++, Objective-C, Java, C#, Perl, Python, Ruby, PHP, Lua, REXX, Haskell, Clojure, Groovy, Jython, JavaScript, Go, Dart, etc are not enough. There are an insane number of programming languages right now, and the Balkanization of the software development world is getting ridiculous.

Rust's safety features aim to eliminate many kinds of memory corruption bugs that are currently abundant in C++ programs,

Sounds more like C-style bugs than C++-style bugs. Isn't Mozilla's energy better spent on something other than developing new programming languages?

More C++-style bugs than C bugs. Memory management isn't really all that complicated for procedural programs. It's the object-oriented programming style that makes memory management such a complicated mess.

It's the object-oriented programming style that enables programs to scale up beyond trivial toy applications. Take a look at any huge, complicated C program (such as GCC or the Linux kernel) and you'll see they've created their own object system, because it's how you scale up.

And then you end up in an even worse situation, because you end up with C++-in-C but with huge amounts of boilerplate code that's easy to mess up.

More C++-style bugs than C bugs. Memory management isn't really all that complicated for procedural programs. It's the object-oriented programming style that makes memory management such a complicated mess.

C++ isn't equivalent to OOP, and C isn't equivalent to no OOP. A FILE* is essentially a pointer to a FILE object, with fopen being the constructor and fclose being the destructor. Whether you use objects depends on the problem you're trying to solve, not (just) on the language you use.