I can’t run the benchmark in Internet Explorer 11 (Inori version); the WebGL player returns an Out of memory error.

On Linux, without proprietary drivers and running Firefox, the benchmark "runs": I mean, the whole screen is messed up, but at least it runs. With proprietary drivers on, I get a wonderful pinkish bubblegum screen, and nothing works.

Glad to see some benchmarks around this. I was especially surprised to see how the different browsers performed on my own hardware. I definitely get different results on Windows 10. I actually get Edge > Firefox > Chrome. Not entirely sure why though.

As far as performance goes, it’s getting to "good enough" for many things. I won’t be trying to push AAA content directly in the browser, but for things like social games and apps, it’s definitely close enough that I don’t care so much about the benchmarks anymore.

The main point of these benchmarks is to show differences in the performance of the browsers’ JavaScript engines, which should be fairly consistent across OSes. The only reason we tested both OS X and Windows in this benchmark was that we wanted to show how Safari compares to the other browsers. Some of the differences in results may also come from differences in WebGL implementations (OS X and Linux should be similar there, as both are based on OpenGL) and in underlying GPU drivers (but showing the differences between GPU drivers is not the point of this benchmark).

The results published in the blog are basically useless to me, because they are normalized. Since they don’t give the numbers in a way that lets me compare my own results to the published ones, it is all meaningless. Even after you run the benchmark yourself, you get no comparative information, and the types of things tested are very different from each other. Presumably there is some typical reason, common to published benchmarks, why this would be concealed. ;)

The reason is not about concealing anything, but simply about making all the benchmarks fit in a single chart, as the numbers for each of them are vastly different. The purpose of this blog post is to show relative differences between different browsers on the same hardware, to show how different JavaScript VMs and WebGL implementations compete. For this purpose, absolute numbers are not needed.

I took a look at the WebGL export in 5.3... safe to say I won’t be bothering with it. The output scene’s lighting was completely off compared to the web standalone, and the performance of the scene, even after turning the settings as low as possible, was just not worth it.

And the WebGL compile time turned my PC into a barely usable brick for way too long. No idea whose bright idea it was to set the compile processes’ thread priority to anything but "below normal"; kinda stupid given there is no setting in Unity to change the thread priority before the build starts. Probably the same programmer who set the lighting build processes the same way.

Oh, and to top it off, the WebGL export left a 70 MB opengl.js, with the total export size being 82 MB, and Unity provides no way of seeing what the hell it has exported. For comparison, the web standalone export was 3 MB.

Maybe in 2017 WebGL might be worth checking out again; right now it’s bleh. Of course, UE’s current track record of releasing decent updates with actual built-in engine features, as opposed to "go find and buy such improvements at the asset store", might have me switching to that in 2016 instead.

"Will Shared Array Buffers be mapped to C# threads so our own multi-threaded code can take advantage of them via IL2CPP?"

Initially, no. Arbitrary C# threads are harder to allow because we cannot walk the stacks in JavaScript to perform garbage collection, so we can only allow threads in a controlled context where we know when we can safely assume no GC objects are referenced on any thread’s stack at certain times.

What exactly are those controlled contexts and certain times if I may ask? Does not referencing GC objects mean we can only use local value types in threads? For example, what would not work in a multithreaded WebGL environment that would work in a multithreaded standalone?

Basically, with the current setup, it is not possible to have any reference to a managed object (i.e., a GC handle) on any stack when garbage collection takes place (which currently happens once at the end of each frame). So managed threads which run longer than the duration of a frame would not be possible. Since this would probably rule out most use cases people are interested in, we would not enable managed threads at all before we can solve this problem somehow.
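To illustrate why the constraint above matters: SharedArrayBuffer-based threading in the browser only ever shares flat numeric memory between agents, never managed object references, so nothing in the shared region needs to be scanned by a garbage collector. A minimal JavaScript sketch (independent of Unity, shown here without a worker for brevity; a real worker would receive the same SharedArrayBuffer via postMessage):

```javascript
// Allocate shared memory: raw bytes only, no object references can live here.
const sab = new SharedArrayBuffer(4 * Int32Array.BYTES_PER_ELEMENT);
const shared = new Int32Array(sab);

// Atomics provide the synchronized reads/writes that a worker thread
// operating on the same buffer would also use.
Atomics.store(shared, 0, 42); // write 42 into slot 0
Atomics.add(shared, 0, 8);    // atomically add 8

console.log(Atomics.load(shared, 0)); // 50
```

Because only value data (here, 32-bit integers) crosses the thread boundary, the GC never has to walk another thread’s stack, which is exactly the restriction described for C# threads under IL2CPP.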

I disagree with the decision to omit standalone results from the comparison.

It’s important to understand the performance hit you’re going to take by choosing WebGL over standalone. Which areas are significantly weaker? Which are nearly comparable?

Further, performance is increasing over time on standalone as well and it’s valuable to compare the changes on each platform. If performance increases on standalone are outpacing those of WebGL, that means the gap between the platforms is increasing even in the face of the WebGL improvements and will affect the decision to leverage the platform. Likewise, if the gap is closing, that makes a stronger case for WebGL.

I have to wonder if the comparison was omitted because it potentially paints WebGL in a more negative light, but imho the comparison is crucial to making an informed decision. Sure, we can do the benchmarks ourselves, but why omit the information when you’re already publishing results publicly?

Noticeable points:
- Scripting benchmarks are actually faster in WebGL than in native builds. This is due to the different scripting backends used (IL2CPP vs. Mono).
- Benchmarks which are mostly rendering bound (Asteroid Field, Particles) perform very close to native.
- Benchmarks which benefit a lot from multithreading (physics, skinning) are significantly faster in native code.

Overall, not much has changed in these findings since last year. No surprises there, as the constraints have not really moved (other than some browsers, like Edge, catching up on performance). This will change when technologies like Shared Array Buffers become available.

I agree there is room to interpret exactly where the performance differences come from. However, it’s not as though you can choose to build for "WebGL using the standalone code paths". Regardless of the sources of the differences, the fact is they are the inherent cost of choosing to target WebGL. As such, the absolute differences are the more salient information, rather than the precise reason they exist.

Unity WebGL runs on mobile, but currently the results are only usable on very high-end devices (depending on your content), so we cannot recommend it. The other engines you named are not really comparable to Unity in terms of functionality, and have a much smaller footprint in code size, which makes them a better fit for today’s mobiles. I expect technology to catch up with this in the future, both in the form of faster mobile devices with more memory, and better performance from browsers and technologies such as WebAssembly.

Unity 5.4 should give you an option for much faster development builds. We are also working on a Build Report feature which will give you detailed information on build size and where that size comes from (currently planned for 5.5).