To celebrate the release of Babylon.js 2.3, the team decided to build a cool new demo showing what can be done with our WebGL engine, but also how HTML5 web standards can be used to build great games today. The idea was to build a similar, if not identical, experience on all WebGL-supported platforms and to try to match the features of native apps.

This article explains how this experience works and the various challenges we faced building it.

For that, this demo uses a fairly impressive number of “HTML5 features”: WebGL and Web Audio of course, but also Pointer Events (available everywhere thanks to the jQuery PEP polyfill), the Gamepad API, IndexedDB, HTML5 AppCache, CSS3 transitions/animations, flexbox and the Fullscreen API.

Discovering the demo

First, you’ll land on an automated sequence giving credit to the people who built this scene.

Most of the team’s members come from the “demo scene”. If you don’t know what that is, have a look at this site or go to Wikipedia. You’ll discover that it’s an important part of the 3D developers’ culture. On my side, I was on Atari while David Catuhe was on Amiga, which is still a regular source of conflict between us ;-). I coded a bit but mainly did the music in my demo group. I was a huge fan of Future Crew and more specifically of Purple Motion, my favorite demo scene composer of all time.

For Sponza, here are the contributors:

Michel Rousseau aka “Mitch”, acting as the 3D artist, did the awesome visuals, animations & rendering optimizations. He took the great Sponza model provided freely by Crytek on their website and used our 3DS Max exporter to generate what you see.

David Catuhe aka “deltakosh” and I did the core part of the Babylon.js engine and all the code for this demo (custom loader, special effects for the demo mode using fade-to-black post-processes, etc.), as well as a new type of camera named “UniversalCamera”, handling all types of input in a generic way.

Julien Moreau-Mathis helped us by building a new tool that helps 3D artists finalize the job between the modeling tools (3DS Max, Blender) and the final result. For instance, Michel used it to test & tune the various animated cameras and to inject the particles into the scene.

If you wait until the end of the automatic sequence, up to the epic finish (my favorite part, you should check it out), you’ll automatically be switched to the free interactive mode. If you’d like to bypass the demo mode, simply click on the camera icon or press A on your gamepad.

In the interactive mode, if you’re on a PC/Mac, you can move inside the scene using keyboard/mouse like in an FPS game; if you’re on a smartphone, you can move using a single touch (and two to rotate the camera); and finally, on an Xbox One, using the gamepad (or on a desktop if you plug a gamepad into it).

Fun fact: on a Windows touch PC, you can potentially use all 3 types of input at the same time.

The atmosphere is different in the interactive mode. You’ve got 3 storm sounds randomly positioned in the 3D environment, some wind, and small fire sounds in each corner. On supported browsers (Firefox, Chrome, Opera & Safari), you can even switch between the normal speaker mode and the headphone mode by clicking on the dedicated icon. This then uses the binaural audio rendering of Web Audio for a more realistic audio simulation if you’re listening through headphones.

To provide a complete app-like experience, we’ve generated icons & tiles for all platforms. This means, for instance, that on Windows 8/10 you can pin this web app to the Start menu. We even have multiple sizes available:

Same on your iPhone, Windows Mobile or Android device:

Offline first!

Once the demo has been completely loaded, you can switch your phone to airplane mode to cut connectivity and click on the Sponza icon. Our web app will still provide the complete experience with WebGL rendering, 3D web audio and touch support. Switch it to full screen and you won’t be able to tell the difference from a native app!

Xbox One enjoys the show too

Last but not least, the very same demo works flawlessly in MS Edge on your Xbox One:

Press A to switch to the interactive mode. The Xbox One tells you that you can now move inside the 3D scene using your gamepad:

So, let’s briefly recap.

The very same code base works across Windows, Mac and Linux in MS Edge, Chrome, Firefox, Opera & Safari, on iPhone/iPad, on Android devices with Chrome or Firefox, on Firefox OS and on Xbox One! Isn’t that cool? Being able to target so many devices with a native-like experience directly from your web server?

Hack the scene with our debug layer to learn how we’ve made it

If you’d like to understand how Michel is mastering the magic of 3D modeling, you can hack the scene using the Babylon.js Debug Layer tool.

To enable it on a machine with a keyboard, press “CTRL + SHIFT + D”; if you’re using a gamepad on PC or Xbox, press “Y”.

Note: displaying the debug layer costs a bit of performance due to the composition work the rendering engine needs to do. So the FPS displayed is a bit lower than the real FPS you get without the debug layer displayed.

Let’s test it on a PC for instance.

Move near the lion’s head and cut the bump channel from our shader’s pipeline:

You should see that the head is now less realistic. Play with the other channels to check what’s going on.

You can also cut the dynamic lighting engine or disable the collisions engine to fly or move through the walls. For instance, uncheck the “collisions” checkbox and fly to the first floor. Put the camera in front of the red flags: you can see them slightly moving. Michel used the bones/skeletons support of Babylon.js to animate them. Disable the “skeletons” option and they should stop moving:

Finally, you can display the meshes tree in the top-right corner. You can enable/disable meshes to completely break the great job done by Michel:

Removing the geometries, our über shader’s channels or some options of our engine can help you troubleshoot performance on a specific device and see what is currently costing too much. You can also check whether you’re CPU-bound or GPU-bound, even if, most of the time, we’re CPU-bound in WebGL due to the mono-threaded nature of JavaScript. Finally, this tool is also very useful for learning how a scene was built by the 3D artist.

By the way, it works great on Xbox One too:

Challenges

We faced a number of questions & challenges to build this experience. Let me share some of them.

WebGL performance and cross-platform compatibility

On the programming side, this one was probably the easiest, as it’s completely handled by the Babylon.js engine itself.

We’re using a unique shader architecture that adapts itself to the platform by trying to find the best shader available for the current browser/GPU using various fallbacks. The idea is to lower the quality/complexity of the rendering until we manage to display something on screen.

Babylon.js is mainly based on WebGL 1.0 to guarantee that the 3D experiences built on top of it will run everywhere. It was built with a web philosophy in mind, so we use a “graceful degradation”-like approach in the shader compilation process. This is completely transparent for the 3D artist, who doesn’t want to deal with those complexities most of the time.

Still, the 3D artist has a very important role in performance optimization. He has to know the platform he’s targeting, its supported features and its limitations. You can’t take assets coming from AAA games made for high-end GPUs and DirectX 12 and push them as-is into a WebGL engine. Targeting WebGL today is very similar to the job you’d do to optimize for mobile, plus the fact that JavaScript is highly mono-threaded.

Mitch is extremely good at that: optimizing the textures, pre-calculating the lighting to bake it into the textures, reducing the number of draw calls as much as possible, etc. He’s got years of experience behind him and has seen the various generations of 3D hardware & engines (from PowerVR/3DFX to today’s GPUs), which really helps.

The initial goal for Sponza was to build 2 scenes: one for desktop and one for mobile with less complexity, smaller textures and simpler meshes/geometries. But during our tests, we discovered that the desktop version was also running pretty well on mobile, up to 60 fps on an iPhone 6s or an Android OnePlus 2. We then decided not to continue working on the simpler mobile version. But again, following the general best practices for web games is a good idea. It would probably have been better to take a pure “mobile first” approach on Sponza to reach 30 fps+ on most mobiles and then enhance the scene for high-end mobiles and desktop. Still, most of the feedback we got on Twitter seems to indicate that the final result works very well on a lot of mobiles. Sponza was optimized on an HD4000 GPU (Intel Core i5 integrated), which is more or less equivalent to current high-end mobile GPUs.

We’re then pretty happy with the performance. Sponza uses our über shader with ambient, diffuse, bump, specular & reflection enabled; we have some particles to simulate small fires in each corner, animated bones for the red flags, 3D positioned sounds and collisions when you’re moving in the interactive mode. Technically speaking, we’ve got 98 meshes in the scene generating up to 377,781 vertices, 16 active bones and 60+ particles, for up to 36 draw calls. And trust me, having few draw calls is the key to optimal performance, even more so on the web.

The loader

For Sponza, we wanted to create a new loader, different from the default one we use on the babylonjs.com website, to really have a polished web app. I then asked Michel to suggest something new.

He first provided me with this:

Which looks very nice when you first look at it!

But then, you quickly ask yourself: “how am I going to make this work on all devices in a responsive way?”. Well, you probably see my point.

Let’s talk first about the background. The effect created by Michel was nice but didn’t work well across all window sizes & resolutions, generating some moiré. I therefore replaced it with a classic screenshot of the scene.

I then wanted the background to completely fill the screen without black bars and without stretching the image and breaking its ratio.
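A minimal sketch of the CSS trick used here, assuming the screenshot is set as a background image (selector and file name are illustrative):

.loaderBackground {
    /* Fill the viewport while preserving the image ratio; the overflow
       is cropped instead of adding black bars or stretching the picture */
    background: url("sponza-screenshot.jpg") no-repeat center center fixed;
    background-size: cover;
}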

We then get the experience I was looking for, whatever screen ratio is being used:

The other parts use classic percentage-based CSS positioning. But my main question was how to handle the size of the fonts based on the size of the viewport.

Well, the good news is that there’s a specific unit for that! It’s named “vw” (or “vh”), where 1vw is 1% of the viewport width, for instance. This CSS unit does all the magic for you and is pretty well supported across browsers. Well, at least all WebGL-compatible browsers support it, and that was more than enough for me.
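For instance, here’s a sketch of a title whose size follows the viewport (class name is illustrative):

.loaderTitle {
    /* 5vw = 5% of the viewport width: the text grows and shrinks
       with the window without any JavaScript */
    font-size: 5vw;
}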

Finally, we animate the opacity property of the background image from 0 to 1 as the download progress moves from 0 to 100%.
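A minimal sketch of that idea (the element ID and progress plumbing are illustrative):

// Map the download progress (0..100) to the background opacity (0..1)
function onLoadingProgress(percentLoaded) {
    var background = document.getElementById("loaderBackground");
    background.style.opacity = percentLoaded / 100;
}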

Handling all inputs in a transparent way for the users

Our WebGL engine does all the work on the rendering side to display the right thing on all platforms. But how do we guarantee that the user will be able to move inside the scene whatever input type they use?

In Babylon.js v2.2, we were already supporting all types of input and user interaction: keyboard/mouse, touch, virtual touch joysticks, gamepad, device orientation VR (for Cardboard) and WebVR, each via a dedicated camera. You can read our documentation to learn more about them: http://doc.babylonjs.com/tutorials/05._Cameras

The idea for Sponza was to have a unique camera handling all possible user scenarios: desktop, mobile & console.

We ended up creating the UniversalCamera. It was so obvious and simple to create that I still don’t know why we didn’t do it before. The UniversalCamera is more or less a gamepad camera extending the TouchCamera, which extends the FreeCamera.

The FreeCamera provides the keyboard/mouse logic, the TouchCamera adds the touch logic, and the final extension adds the gamepad logic.

This UniversalCamera is now used by default by Babylon.js. If you browse the demos on http://babylonjs.com, you can now move inside all the scenes using mouse, touch & gamepad. To learn more about how we built it, read our code: babylon.universalCamera.ts
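Using it looks like any other Babylon.js camera; a minimal sketch (variable names are illustrative):

// One camera handling keyboard/mouse, touch & gamepad at the same time
var camera = new BABYLON.UniversalCamera("univCam", new BABYLON.Vector3(0, 5, -10), scene);
camera.setTarget(BABYLON.Vector3.Zero());
camera.attachControl(canvas, true); // canvas is your rendering <canvas> element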

Synchronizing the transitions with the music

This is where we asked ourselves the most questions. You may have noticed that the introduction sequence is synchronized with specific moments of the music. The first texts are displayed with some of the drums I used, and the final ending sequence switches quickly from one camera to another on each note of the horn instrument I’m using.

Synchronizing audio with the WebGL rendering loop is not easy. Again, it’s the mono-threaded nature of JavaScript that generates this complexity. To better understand why, please read the first part of this article: Introduction to the HTML5 Web Workers: the JavaScript multithreading approach. It’s really important to understand the problem highlighted by the first diagram to grasp the global issue we’re facing.

Usually, in demo scenes (and video games, I guess), if you’d like to synchronize the visuals with the sounds/music, you’re driven by the audio stack. 2 approaches are used:

1 – Generate some metadata injected into the audio files that can raise events when you reach a specific part of them.

2 – Analyze the audio stream in real time via FFT or similar to detect interesting peaks or BPM changes that would, again, generate events for the visual engine.

Those approaches work particularly well in multi-threaded environments like C++. But in JavaScript with Web Audio, we’ve got 2 problems:

1 – JavaScript is mono-threaded and, unfortunately, most of the time web workers won’t help.

2 – Web Audio doesn’t have any events that could be sent back to the UI thread, even though Web Audio is handled on a separate thread by the browser.

Web Audio has a much more precise timer than JavaScript. It would have been cool to be able to use this separate timer on a separate thread to drive events back to the UI thread. But today, you can’t do that.

On the other side, we’re rendering the scene using WebGL and the requestAnimationFrame function. This means that, in the “best case”, we have a 16 ms window per frame. If you miss one, you’ll have to wait up to 16 ms to be able to act on the next frame to reflect the sound synchronization (for instance, to launch a fade-to-black effect).

I then thought about injecting the synchronization logic into the requestAnimationFrame loop: checking the time elapsed since the beginning of the sequence and checking whether I should change the visuals to react to an audio event. The good news is that Web Audio renders the sound whatever is going on in the main UI thread. For instance, you can be sure that the 12 s timestamp of the music will arrive exactly 12 s after the music has started, even if the GPU is having a hard time rendering the scene.

But we finally chose the simplest solution ever: using setTimeout() calls…

Yes, I know. The above article on the introduction to web workers says that this is unreliable. But in our case, once the scene is ready to be rendered, we know that we’ve downloaded all our resources (textures & sounds) and compiled our shaders. We shouldn’t be too bothered by unexpected events saturating the UI thread. GC could be a problem, but we’ve also spent a lot of time fighting it in the engine: Reducing the pressure on the garbage collector by using the F12 developer bar of Internet Explorer 11
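To give you an idea, here’s a simplified sketch of the approach; the timestamps and helper functions are made up for illustration:

// Schedule the visual transitions relative to the moment the music starts.
// launchFadeToBlack() and switchToCamera() stand in for our demo helpers.
var demoEvents = [
    { time: 12000, action: function () { launchFadeToBlack(); } },
    { time: 24500, action: function () { switchToCamera(2); } }
];

function startDemoSequence() {
    music.play(); // Web Audio keeps perfect time on its own thread
    demoEvents.forEach(function (evt) {
        setTimeout(evt.action, evt.time);
    });
}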

Still, I know this is far from an ideal solution. Switching to another tab, or locking your phone and unlocking it a few seconds later, could generate some issues in the synchronized parts of our demo. We could fix those problems by using the Page Visibility API: pausing the rendering loop and the various sounds, and re-computing the next timeframes for the setTimeout() calls.

I’ve got the feeling that I’ve missed something, and that there is probably a better approach to this problem. So feel free to share your ideas or solutions in the comments section if you think you’ve got something better!

Oh, by the way, the text animations are simply done using CSS3 transitions or animations combined with a flexbox layout: a simple but efficient way to display text at the center or in each corner by setting alignItems and justifyContent to “center”, “flex-start” or “flex-end”.
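For instance, centering a credit text boils down to something like this sketch (class name is illustrative):

.credits {
    display: flex;
    align-items: center;     /* or "flex-start" / "flex-end" to pin vertically */
    justify-content: center; /* or "flex-start" / "flex-end" to pin horizontally */
}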

Handling Web Audio on iOS

The last challenge I’d like to share with you is the way Web Audio is handled by iOS on iPhone & iPad. If you search for “web audio + iOS”, you’ll find a lot of people having a hard time playing sounds on iOS.

iOS supports Web Audio perfectly, even the binaural audio mode. But Apple has decided that a web page can’t play any sound by default without a specific user interaction. It’s probably to prevent ads or anything else from disturbing the user by playing unsolicited sounds.

You therefore first need to unlock the iOS web audio context after a user touch before trying to play any sound. Otherwise, your web application will remain desperately mute.

Unfortunately, the only way I’ve found to do this check is a platform-sniffing approach, as I didn’t find a feature-detection way to do it. I really hate doing that, but sometimes it’s the only solution you’ve got…

If you’re not on an iPad/iPhone/iPod, the audio context is immediately available for use. Otherwise, we unlock the iOS audio context by playing a code-generated empty sound on the touchend event. You can register to the onAudioUnlocked event if you’d like to wait for that before launching your game.
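The unlock trick itself is a well-known pattern; here’s a simplified sketch of it (the onAudioUnlocked plumbing is omitted):

// Play a short empty buffer inside a touchend handler to unlock iOS audio
function unlockiOSAudio(audioContext) {
    window.addEventListener("touchend", function onFirstTouch() {
        var source = audioContext.createBufferSource();
        source.buffer = audioContext.createBuffer(1, 1, 22050); // empty sound
        source.connect(audioContext.destination);
        source.start(0);
        window.removeEventListener("touchend", onFirstTouch);
    }, false);
}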

So if you’re launching Sponza on an iPhone/iPad, you’ll have this final screen at the end of the loading sequence:

Touching anywhere on the screen will unlock the audio stack of iOS and will launch the show.

I hope that you’ve enjoyed our Sponza by Babylon.js experience and that you’ve learned some interesting details in this article. To learn more, read the complete source code of this demo on our GitHub. The main files to review are index.js and babylon.demo.ts

Finally, I really hope you’ll now be even more convinced that the web is definitely a great platform for gaming!

Stay tuned, we’ve got great new demos coming soon, and trust me, they are very, very cool.

Starting yesterday, a new update is available on Xbox One. As a gamer, I really love the new dashboard and the Xbox 360 backward compatibility. But as a web developer, I’m more than happy to now have Microsoft Edge running on my console! This means that you can now run very modern content inside the Xbox One browser!

Testing your HTML5 content on Xbox One / MS Edge

It supports, for instance, WebGL, Web Audio and the Gamepad API, as you can see in this video:

We’re using the Babylon.js Mansion demo on the Xbox One, and it runs perfectly fine! I’m still amazed by the beauty of the web standards: if you follow best practices, your code runs everywhere! Even better, I’m using the Xbox Windows Store app in this video to stream the video output of my Xbox One to my Windows 10 PC. This means that I can test my web content on my Xbox One without leaving my chair.

Thanks to the Xbox Windows Store app, you can remotely test the MS Edge Xbox One browser from your Windows 10 PC!

Remote debugging your HTML5 content on Xbox One thanks to Vorlon.js

Vorlon.js is an open-source, cross-platform remote debugging tool made to easily debug any web page from any device. Let’s check that out together in the following video:

By simply adding a single script reference at the beginning of your HTML page, you can remote debug your site from any browser using the Vorlon.js dashboard. In this video, I’m checking the support for the Gamepad API using the Modernizr plug-in, using the interactive console to check for potential errors and to execute JavaScript on the Xbox One, and finally the DOM Explorer to update the HTML and some flexbox properties. Again, the Xbox Windows Store app is very useful to live debug the page. Simply snap it on the left and the Vorlon.js dashboard on the right. Even better, use multiple screens on your Windows 10 PC:

I’m definitely more than convinced that web standards and WebGL offer great new possibilities for the gaming industry! I’ve talked about this in a previous article: The web: the next game frontier? It really seems to be coming true.

Today, thanks to the power of the Web Audio API, you can create immersive audio experiences directly in the browser. No need for any plug-in to create amazing experiences. I’d like to share with you what I’ve learned while building the audio engine of our Babylon.js open-source gaming engine.

Web Audio in a nutshell

If you’ve ever tried to do something more than streaming some sounds or music using the HTML5 audio element, you know how limited it is. Web Audio breaks all those barriers and gives you access to the complete audio stream and processing pipeline, like in any modern native application. It works thanks to an audio routing graph made of audio nodes. This gives you precise control over time, filters, gains, analysers, convolvers and 3D spatialization.

It’s now widely supported (MS Edge, Chrome, Opera, Firefox, Safari, iOS, Android, Firefox OS and Windows Mobile 10). In MS Edge (and I’m guessing in other browsers too), it’s rendered in a separate thread from the main JS thread. This means that, almost all the time, it will have little if any performance impact on your app or game. All browsers support at least the WAV and MP3 codecs, and some of them also support OGG. MS Edge even supports the multi-channel Dolby Digital Plus™ audio format! You need to pay attention to that to build a web app that runs everywhere, potentially providing multiple sources for all browsers.

Audio routing graph explained

Note: this picture was built using the graph displayed in the awesome Web Audio tab of the Firefox DevTools. I love this tool, so much that I’ve planned to mimic it via a Vorlon.js plug-in. I’ve started working on it, but it’s still a very early draft.

Let’s have a look at this diagram. Every node can take something as input and be connected to the input of another node. In this case, we can see that MP3 files act as the sources of the AudioBufferSource nodes, connected to a Panner node (which provides spatialization effects), connected to Gain nodes (volume), connected to an Analyser node (to have access to the frequencies of the sounds in real time), finally connected to the AudioDestination node (your speakers). You can also see that you can control the volume (gain) of a specific sound, or of several of them at the same time. For instance, in this case, I’ve got a “global volume” handled via the final gain node just before the destination, and some kind of per-track volume with gain nodes placed just before the analyser node.
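In code, building such a chain is just a series of connect() calls; a minimal sketch for one track:

var audioContext = new (window.AudioContext || window.webkitAudioContext)();
// decodedBuffer is assumed to come from audioContext.decodeAudioData()
var source = audioContext.createBufferSource();
source.buffer = decodedBuffer;

var panner = audioContext.createPanner();     // 3D spatialization
var trackGain = audioContext.createGain();    // per-track volume
var analyser = audioContext.createAnalyser(); // real-time frequencies
var masterGain = audioContext.createGain();   // global volume

source.connect(panner);
panner.connect(trackGain);
trackGain.connect(analyser);
analyser.connect(masterGain);
masterGain.connect(audioContext.destination); // your speakers
source.start(0);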

I’d like to play 2 synchronized music tracks, each with its own gain connected to its own analyser. This means we’ll be able to display the frequencies of each track on a separate canvas and play with the volume of each one separately. Finally, we’ll have a global volume acting on both tracks at the same time, and a third analyser displaying the cumulated frequencies of both tracks on a third canvas.

Simply have a look at the source code using your favorite browser and its F12 tool.

sound.js abstracts the logic to load a sound, decode it and play it. analyser.js abstracts the work needed to use the Web Audio analyser and display frequencies on a 2D canvas. Finally, analysersSample.js uses both files to create the above result.

Play with the sliders to act on the volume of each track or on the global volume to see what’s going on. For instance, if you set the global volume to 0, try to understand why you can still see the 2 analysers showing data on the right.

To play the 2 tracks synchronized, you just need to load them via XHR2 requests, decode the sounds, and as soon as those 2 asynchronous operations are done, simply call the Web Audio play function on each of them.
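A condensed sketch of that loading & synchronized start, reusing the audioContext from the previous sketch (URLs are placeholders, and the two files are loaded sequentially here for brevity):

function loadSound(url, callback) {
    var request = new XMLHttpRequest();
    request.open("GET", url, true);
    request.responseType = "arraybuffer"; // XHR2 binary download
    request.onload = function () {
        audioContext.decodeAudioData(request.response, callback);
    };
    request.send();
}

function playAt(buffer, when) {
    var source = audioContext.createBufferSource();
    source.buffer = buffer;
    source.connect(audioContext.destination);
    source.start(when);
}

loadSound("track1.mp3", function (buffer1) {
    loadSound("track2.mp3", function (buffer2) {
        // Both tracks are decoded: start them on the same audio clock tick
        var startTime = audioContext.currentTime + 0.1;
        playAt(buffer1, startTime);
        playAt(buffer2, startTime);
    });
});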

Web Audio in Babylon.js

Basics

I wanted to keep our “simple & powerful” philosophy while working on the audio stack. Even if Web Audio is not that complicated, it’s still quite verbose to me, and you quickly find yourself doing boring and repetitive tasks to build your audio graph. My objective was therefore to encapsulate that in a very simple abstraction layer and let people create 2D or 3D sounds without being sound engineers.

The first sound is played automatically in a loop as soon as it has been loaded from the web server and decoded by Web Audio.

The second sound is played once, thanks to the callback function that triggers the play() function on it once it’s decoded and ready to be played. It’s also played at a 2x playback rate.

The last sound uses the “streaming: true” option. It means we use the HTML5 Audio element instead of doing an XHR2 request and using a Web Audio AudioBufferSource object. This can be useful if you’d like to stream a music track rather than waiting for it to download completely. Still, you can apply all the magic of Web Audio on top of this HTML5 Audio element: analysers, 3D spatialization and so on.
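Here is roughly what those three declarations look like (a sketch; file names are placeholders):

// 1) Played automatically in a loop as soon as it's downloaded & decoded
var music = new BABYLON.Sound("Music", "music.wav", scene, null,
    { loop: true, autoplay: true });

// 2) Played once via the ready callback, at a 2x playback rate
var gunshot = new BABYLON.Sound("Gunshot", "gunshot.wav", scene,
    function () { gunshot.play(); }, { playbackRate: 2.0 });

// 3) Streamed through an HTML5 Audio element instead of XHR2 + AudioBufferSource
var radio = new BABYLON.Sound("Radio", "bigMusicFile.mp3", scene, null,
    { streaming: true, autoplay: true });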

Engine, tracks and memory footprint

The Web Audio audio context is only created at the very last moment. When the Babylon.js audio engine is created in the Babylon.js main engine constructor, it only checks for Web Audio support and sets a flag. Also, every sound lives inside a sound track. By default, a sound is added to the main sound track attached to the scene.

The Web Audio context and the main track (containing its own gain node attached to the global volume) are really created only after the very first call to new BABYLON.Sound().

We’ve done that to avoid creating audio resources for scenes that won’t use audio, which is the case today for most of our scenes hosted on http://www.babylonjs.com. Still, the audio engine is ready to be called at any time. This also helps us detach the audio engine from the core game engine if we’d like to build a core version of Babylon.js without any audio code, to make it more lightweight. Finally, we’re currently working with big studios that pay a lot of attention to the resources being used.

To better understand that, we’re going to use Firefox and its Web Audio tab.

The audio context has been dynamically created after the first call to new BABYLON.Sound(), as well as the main volume, the main track volume and the volume associated with this sound.

3D Sounds

Basics

Web Audio is highly inspired by models used in libraries such as OpenAL (Open Audio Library) for 3D. Most of the 3D complexity is therefore handled for you by Web Audio: the 3D position of the sounds, their direction and their velocity (to apply a Doppler effect).

As a developer, you then need to ask Web Audio to position your sound in 3D space and to set and/or update the position and orientation of the Web Audio listener (your virtual ears) using the setPosition() and setOrientation() functions. But if you’re not a 3D guru, this can be a bit complex. That’s why we do it for you in Babylon.js: we update the Web Audio listener with the position of our camera in the 3D world. It’s as if some virtual ears (the listener) were stuck onto some virtual eyes (the camera).
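Under the hood, this boils down to something like the following sketch, executed on every frame:

// Keep the Web Audio listener glued to the active camera (simplified)
scene.registerBeforeRender(function () {
    var cam = scene.activeCamera;
    audioContext.listener.setPosition(cam.position.x, cam.position.y, cam.position.z);
});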

Up to now, we’ve been using 2D sounds. To transform a sound into a spatial sound, you need to specify that via the options:

– distanceModel (the attenuation) uses a “linear” equation by default. The other options are “inverse” and “exponential”.

– maxDistance is set to 100. This means that once the listener is farther than 100 units from the sound, the volume will be 0: you can’t hear the sound anymore.

– panningModel is set to “equalpower”, following the specification. The other available option is “HRTF”. The specification describes it as “a higher quality spatialization algorithm using a convolution with measured impulse responses from human subjects. This panning method renders stereo output”. This is the best algorithm when using headphones.

maxDistance is only used with the “linear” attenuation. Otherwise, you can tune the attenuation of the other models using the rolloffFactor and refDistance options. Both are set to 1 by default, but you can change them of course.
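Putting it together, a spatial sound declaration could look like this sketch (the values shown are the defaults discussed above):

var spatialMusic = new BABYLON.Sound("SpatialMusic", "music.wav", scene, null, {
    loop: true,
    autoplay: true,
    spatialSound: true,        // switches the sound from 2D to 3D
    distanceModel: "linear",   // or "inverse" / "exponential"
    maxDistance: 100,          // used by the "linear" model only
    panningModel: "equalpower" // or "HRTF" for headphones
});
spatialMusic.setPosition(new BABYLON.Vector3(0, 0, 10));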

Move through the scene using keyboard & mouse. Each sound is represented by a purple sphere. When you enter a sphere, you’ll start hearing one of the music tracks. The sound is loudest at the center of the sphere and falls to 0 as you leave the sphere. You’ll also notice that the sound is balanced between the right & left speakers, as well as front/rear (which is harder to notice using only classic stereo speakers).

Attaching a sound to a mesh

The previous demo is cool but didn’t reach my “simple & powerful” bar. I wanted something even simpler to use, and I finally ended up with the following approach.

Simply create a BABYLON.Sound, attach it to an existing mesh, and you’re done! Only 2 lines of code! If the mesh is moving, the sound moves with it. You have nothing else to do; everything is handled by Babylon.js.

Calling the attachToMesh() function on a sound automatically transforms it into a spatial 3D sound. Using the above code, you fall back to the default Babylon.js values: a linear attenuation with a maxDistance of 100 and a panning model of type “equalpower”.
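Those 2 lines look like this (mesh and file names are illustrative):

var violons = new BABYLON.Sound("Violons", "violons.wav", scene, null,
    { loop: true, autoplay: true });
violons.attachToMesh(movingBox); // movingBox is an existing BABYLON.Mesh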

Don’t move the camera; you should hear my music rotating around your head as the cube moves. Want an even better experience with your headphones?

Launch the same demo in a browser supporting HRTF (like Chrome or Firefox), click on the “Debug layer” button to display it and scroll down to the audio section. Switch the current panning model from “Normal speakers” (equalpower) to “Headphone” (HRTF). You should enjoy a slightly better experience. If not, change your headphones… or your ears.

Note: you can also move through the 3D scene using the arrow keys & mouse (like in an FPS game) to check the impact of the 3D sound positioning.

Creating a spatial directional 3D sound

By default, spatial sounds are omnidirectional, but you can make them directional if you’d like to.

Note: directional sounds only work for spatial sounds attached to a mesh.
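A sketch of how to define one, using the cone API:

// Inner cone of 90°, outer cone of 180°, volume scaled to 0 outside the outer cone
music.setDirectionalCone(90, 180, 0);
// Direction of the cone, expressed in the local space of the attached mesh
music.setLocalDirectionToMesh(new BABYLON.Vector3(1, 0, 0));
music.attachToMesh(mesh);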

Going further

Try to find the interactive objects in the Mansion scene; they’re almost all associated with a sound. To build this scene, our 3D artist, Michel Rousseau, didn’t use a single line of code! He made it entirely using our new Actions Builder tool and 3DS Max. The Actions Builder is indeed currently only available in our 3DS Max exporter; we will soon expose it in our sandbox tool. If you’re interested in building a similar experience, please read these articles:

Building a demo using multitracks & analysers

In order to show you what you can quickly do with our Babylon.js audio stack, I’ve built this fun demo:

It uses several features of Babylon.js, such as the assets loader (which automatically loads the remote WAV files and displays a default loading screen), multiple sound tracks, and several analyser objects connected to the sound tracks.

Please have a look at the source code of multitracks.js via your F12 tool to check how it works.

This demo is based on a music track I composed in the past and slightly re-arranged: The Geek Council. You can do your own mix of the first part of the music by playing with the sliders. You can click as many times and as quickly as possible on the “Epic cymbals” button to add a dramatic effect. Finally, click on “Initiate epic end sequence”: it waits for the end of the current loop, then stops the first 4 tracks (drums, epic background, pads & horns) to play the final epic sequence! Click on the “Reset” button to restart from the beginning.

Note: it turns out that to get great sound loops, without any glitch, you need to avoid MP3, as explained in this forum: “The simplest solution is to just not use .mp3. If you use .ogg or .m4a instead you should find the sound loops flawlessly, as long as you generate those files from the original uncompressed audio source, not from the mp3 file! The process of compressing to mp3 inserts a few ms of silence at the start of the audio, making mp3 unsuited to looping sounds. I read somewhere that the silence is actually supposed to be file header information but it incorrectly gets treated as sound data!”. That’s why I’m using WAV files in this demo: it’s the only clean solution working across all browsers.

And what about the Music Lounge demo?

The Music Lounge demo basically re-uses everything described before. But you’re missing one last piece to be able to recreate it, as it’s based on a feature I haven’t explained yet: the customAttenuationFunction.

You have the option to provide your own attenuation function to Babylon.js, replacing the default ones provided by Web Audio (linear, exponential and inverse), as explained in our documentation.

For instance, if you try this playground demo, you’ll see I’m using a silly attenuation function:
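It was essentially a reversed attenuation; something like this sketch:

// Silly on purpose: silent near the sound, louder as you move away
sound.setAttenuationFunction(function (currentVolume, currentDistance,
                                       maxDistance, refDistance, rolloffFactor) {
    return currentVolume * (currentDistance / maxDistance);
});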

Near the object, the volume is almost 0. The farther you move away, the louder it gets. It’s completely counter-intuitive, but it’s a sample to give you the idea.

In the case of the Music Lounge demo, it has been very useful. Indeed, the volume of each track doesn’t depend on the distance of each cube from the camera, but on the distance between the cube and the center of the white circle. If you look at the source code, the magic happens here:

We compute the distance between the current cube (this._box) and the (0,0,0) coordinates, which happen to be the center of the circle. Based on this distance, we set a linear attenuation function for the volume of the sound associated with the cube. The rest of the demo uses sounds attached to the various meshes (the cubes) and an analyser object connected to the global volume to animate the outer circle made of particles.

I hope you’ve enjoyed this article as much as I enjoyed writing it. I also hope that it will bring you new ideas for creating cool Web Audio / WebGL applications or demos!

As the co-author of Babylon.js, a WebGL gaming engine, I always felt a little uneasy listening to folks discuss accessibility best practices at web conferences. The content created with Babylon.js is indeed completely inaccessible to blind people. Making the web accessible to everyone is very important, and I’m more convinced than ever about that as I’m personally touched by it via my own son. So I wanted to contribute to the accessibility of the web in some way.

That’s why I decided to work on creating a game that uses WebGL and is fully accessible, to prove that visual games aren’t inherently inaccessible. I chose to keep it simple, so I created a breakout clone, which you can see in action in the following YouTube video:

Now, let me share with you the background story of this game and all the experiments involved…

Once upon a time

It all started during the Kiwi Party 2014 conference, while listening to Laura Kalbag’s talk about guidelines for top accessible design considerations. I was discussing with Stéphane Deschamps, a lovely, funny & talented guy, my lack of knowledge on how to make WebGL accessible and how I could prevent people from creating lots of inaccessible content. To motivate me, he challenged me, probably without estimating the consequences: “it would be very cool if you’d manage to create an accessible breakout game!”. Boom. The seed of what you see here got planted in my brain right there and then. I started thinking about it in earnest and researched how I could create such an experience.

First, I discovered that there were already accessible audio games available at audiogames.net and game-accessibility.com. I also researched best practices for creating games for blind people. While interesting to read, it wasn’t what I was looking for. I didn’t want to create a dedicated experience for blind people; I wanted to create a universal game, playable by anybody, regardless of ability. I’m convinced that the web was created for this reason, and my dream was to embrace this philosophy in my game. I wanted to create a unique experience that could be played by all kinds of users so they could share in the joy together. I wanted great visuals & sounds, not a “look, it’s accessible, that’s why it can’t be as good” solution.

The beauty of the SVG viewBox is that it scales perfectly across sizes & resolutions

This became the baseline for my experiments.

For audio, I had several ideas. The main trick I wanted to use was spatial sound to enable people to know where they are on the board without having to see the screen. This can be achieved using Web Audio. As I didn’t have access to a visually impaired tester, I ‘cheated’ by closing my eyes while wearing a good set of headphones. You’ll see later that testing the game with a real blind user helped me fix a lot more things, but as a start, this was an OK way to test the game.

I then moved the sound source from the mouse cursor to the position of the ball in the game. Testing that didn’t work out as well as I’d hoped. It was too complex to understand exactly where the ball was on the screen by sound alone, and you couldn’t predict the direction of the ball like you can when you see the screen. Still, I thought it was interesting to emit some 3D sounds when the ball was hitting something—a brick or one of the walls. It was information that could be useful to anybody, so I kept that part.

As I’m also a composer in my spare time, my next idea was to use a specific piano note for every brick’s column, thereby adding a sense of what is left and right. By default, I chose to have 8 columns to cover an octave. I coded it and… it was fun, but didn’t help the gameplay.

I knew I needed help, so I showed what I’d done to my oldest son, and he came up with the best solution. He told me it would make sense to use the play rate and stereo position of the sound to provide information about where the ball was. After several tests, I ended up with the following algorithm:

– If the ball is perfectly vertically aligned with the paddle, play the sound at the “normal” rate.

– If the ball is not aligned with the paddle, slow down the play rate. The farther the ball is from the paddle, the slower the sound plays. This gives blind players immediate feedback that the ball is no longer aligned and that they need to move the paddle to avoid missing it.

– Play the sound in a spatialized way: 0 on the X axis if the ball is at the center of the paddle, and –value or +value on the X axis based on the distance of the ball from the paddle.
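To give a concrete idea, here’s a simplified sketch of that feedback loop (all the names are illustrative, not the game’s actual code):

// Called on every frame with the current ball & paddle positions
function updateAudioHint(ballX, paddleX, boardWidth) {
    var normalized = (ballX - paddleX) / (boardWidth / 2); // -1 .. +1
    // Slow the music down as the ball drifts away from the paddle
    musicSource.playbackRate.value = 1 - 0.5 * Math.abs(normalized);
    // Pan the sound toward the side of the board the ball is on
    musicPanner.setPosition(normalized * 10, 0, 0);
}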

The first tests of this algorithm were very encouraging—I was almost able to play the game with my eyes closed. After a while, I tweaked the gameplay and the algorithm to address some issues I was seeing. You can’t anticipate the ball’s direction when you can’t see it, so it was too difficult to move the paddle when the music suddenly slowed down; you just couldn’t adjust the paddle position in time.

To address this, I added some tolerance. First of all, the paddle is twice as wide in the “accessible mode” to compensate for not being able to see it. Secondly, I slow down the ball once it reaches 80% of the vertical screen in order to give users a little more time to bounce it before it hits the ground. Finally, I changed the play rate as soon as the ball is not aligned with 66% of the paddle width. The rest of the paddle still works for the ball collision, but using this approach enables a blind user to anticipate when the ball is about to miss the paddle.

I was very happy with the game using these gameplay parameters. I tested the game with several of my colleagues, who were able to play it with their eyes closed. But they all knew what a breakout game should look like and, thus, their brain was able to more or less anticipate the gameplay mechanics already. They were conditioned.

My ultimate test was during Paris Web 2014, an awesome and well-known conference in France. My goal was to finish a first draft of the game for the famous lightning talks. I was a bit nervous about what I’d done and met Stéphane again to share my worries. He told me I should talk to Sylvie Duchateau, a blind woman involved in web accessibility, describe what I’d done and do a quick test with her.

During one of the breaks, I shared my project and the audio gameplay ideas behind it with her. To my surprise, she told me she didn’t know what a breakout game was! Which is obvious once you think about it. If you can’t see, a purely visual game doesn’t have much appeal to you. However, she found the idea of a game with spatial audio interesting, so we gave it a go.

She put on my headset, and I started the game… to my dismay, she wasn’t able to play the game at all. There was too much audio information to precisely decide what to do. Should I move left or right now? After a brief discussion, she told me I should remove some audio details. She also suggested that I avoid using Web Audio spatialization for the music (it was moving from the center to the left or right based on the distance from the paddle) and instead enable only the right or left speaker, in order to provide a very clear instruction on what to do. I quickly fixed the code while she was there, and she was immediately able to break her first 2 bricks. I was so happy, you can’t even imagine. She even asked me what the best score to beat was, which means I had reached my goal of delivering an accessible game—at least for the visually impaired.

Other ideas I added to the game

I can’t remember every trick I tried to optimize the gameplay to be “universal”, so I’ll wrap up with what I implemented.

Speech synthesis

Some users may not be able to see how many bricks are left. Similarly, they have no way of knowing if they have won or lost, based on the visuals. That’s why I thought it was a good idea to use the Web Audio speech library meSpeak.js to add audio cues. However, after discussing it with Anthony Ricaud and a bunch of other people at the event, it turned out that wasn’t the best solution. The issue was that I would be forcing a specific voice and speed in my code. Users of assistive technology, however, already have preferred settings—a certain voice at a defined speed. It is therefore better to use an ARIA live region to update the user during gameplay. I’m sure there is more I can do too; feel free to enhance my code if you’d like to, I’d appreciate it.

The speech synthesis currently tells you the number of bricks left to break, that the game has started or ended (by losing or winning), and your final score. As values in an ARIA live region, screen readers will automatically read this information to the user. Visual users don’t need a robot voice to tell them what’s going on.
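A minimal sketch of such a region (element ID is illustrative):

// The element's text is announced by screen readers whenever it changes
var status = document.getElementById("gameStatus");
status.setAttribute("aria-live", "assertive");
function announce(message) {
    status.textContent = message; // e.g. "12 bricks left"
}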

SVG styling

I decided to use SVG for this game for several reasons: it scales perfectly on all screens as it’s vector based, it can be coupled with CSS for the design, and, last but not least, it works perfectly well with ARIA. I’ve already mentioned the scaling part earlier in this article and I haven’t done enough research into where ARIA (apart from Live Regions) could be useful with SVG in this case.

CSS, on the other hand, was very helpful to me. As a reminder, my goal was to have the same game, with the same code base, usable by anybody. When you load the game, I load the default style sheet with optimizations for the visually impaired. Here’s why:

– If you can’t see, or can only see partially, it’s better to start with the high-contrast visuals. I load “indexvi.css” by default to have high-contrast colors using yellow & black. I also disable the background WebGL canvas to reduce visual clutter. If you can see and don’t like this, you can uncheck the appropriate options and get the starfield and less vivid visuals.

– If you can’t see at all, you can disable the “visually impaired” option to enable high-quality graphics. This loads the “index.css” style sheet and enables the WebGL background. Thanks to the beauty of SVG mixed with CSS, we just need to load this new style sheet and the rest happens automatically. Of course, someone who can’t see doesn’t care about having poor or great graphics, but it’s better for people watching you play, as it shows that accessible games don’t have to look basic.

– If you can see clearly, uncheck all options. You’ll get great visuals, and the speed and paddle width will be adjusted to be more difficult. You also won’t get the audio cues about how many bricks are left and whether you won or lost. That would be unnecessary—it should be pretty obvious.

In conclusion, here is the workflow:

– At first launch of the game, we anticipate a visual impairment and give you a high contrast version of the game:

– If you can’t see at all, you can uncheck the “Visual Impaired” option to enable great graphics for your surrounding audience. The paddle width remains the same and you still have audio assistance:

– If you don’t have any visual impairments, you can uncheck everything to make the paddle more narrow and the ball speed faster:

Ideas not implemented & conclusion

The challenge I gave myself was to deliver a great gaming experience independent of a person’s ability to see. I know that I haven’t completely fulfilled this commitment—for example, if you can’t see at all, you don’t know where the remaining bricks are on the screen, whereas if you can see or have minor visual impairments, you can likely locate the remaining bricks and adjust the ball direction to break them.

My initial idea was to use speech synthesis once there are only 10 bricks left. It could say something like: “4 bricks on the left, 4 in the center and 2 on the right”. Still, this is not very precise, and it remains difficult to change the direction of the ball without visuals. But maybe one of you will find a cool & elegant solution to that (hint, hint).

Still, I’m quite happy about this challenge and I had a lot of fun trying to solve it. I’ve learned a lot by reading articles dealing with accessibility. I also hope I’ve proven that accessibility can be provided to people, even in unexpected areas, by simply thinking about what’s possible. Last but not least, I learned that by enabling accessibility in your games, you can improve the experience for everybody.

I often get questions from developers like, “with so many touch-enabled phones and tablets, where do I start?” and “what is the easiest way to build for touch input?” Short answer: “It’s complex.” Surely there’s a more unified way to handle multi-touch input on the web – in modern, touch-enabled browsers or as a fallback for older browsers. In this article, I’d like to show you some browser experiments using Pointers – an emerging multi-touch technology – and polyfills that make cross-browser support, well, less complex. The kind of code you can experiment with and easily use on your own site.

The reason I experiment with Pointer Events is not based on device share; it’s because Microsoft’s approach to basic input handling is quite different from what’s currently available on the web, and it deserves a look for what it could become. The difference is that developers can write to a more abstract form of input, called a “Pointer”. A Pointer can be any point of contact on the screen made by a mouse cursor, pen, finger, or multiple fingers. So you don’t waste time coding for every type of input separately.

Article updated on 08/10/2015 to reflect the support of the non-prefixed CR version of Pointer Events in IE11/Microsoft Edge/Firefox Nightly & Windows 8.1/10 apps.

The concepts

We’ll begin by reviewing apps running inside Internet Explorer 11, Microsoft Edge or Firefox Nightly, which expose the Pointer Events API, and then solutions to support all browsers. After that, we’ll see how you can take advantage of the IE/MS Edge gesture services that help you handle touch in your JavaScript code in an easy way. As Windows 8.1/10 and Windows Phone 8.1/Mobile 10 share the same browser engine, the code & concepts are identical for both platforms. Moreover, everything you’ll learn about touch in this article will help you do the very same tasks in Windows Store apps built with HTML5/JS, as this is again the same engine being used.

The idea behind Pointers is to let you address mouse, pen & touch devices via a single code base, using a pattern that matches the classical mouse events you already know. Indeed, mouse, pen & touch have some properties in common: you can move a pointer with any of them, and you can click on an element with any of them, for instance. So let’s address these scenarios via the very same piece of code! Pointers aggregate those common properties and expose them in a similar way to mouse events.

The most obvious common events are then pointerdown, pointermove & pointerup, which directly map to their mouse event equivalents. You get the X & Y coordinates on the screen as output.

But of course, there can also be cases where you want to handle touch differently from the default mouse behavior, to provide a different UX. Moreover, thanks to multi-touch screens, you can easily let the user rotate, zoom or pan elements. Pens/styluses can even provide pressure information a mouse can’t. Pointer Events still aggregate those differences and let you build custom code for each device’s specifics.

Note: the following embedded samples are of course better tested with a touch screen, on a Windows 8.1/10 device or a Windows Phone 8+. Still, you have some options:

Handling simple touch events

Step 1: do nothing in JS but add a line of CSS

Let’s start with the basics. You can take any of your existing JavaScript code that handles mouse events, and it will just work as-is with pens or touch devices in Internet Explorer 10/11 & MS Edge. IE & MS Edge indeed fire mouse events as a last resort if you’re not handling Pointer Events in your code. That’s why you can “click” on a button or on any element of any webpage using your fingers, even if the developer never thought that one day someone would do it this way. So any code registering to mousedown and/or mouseup events will work with no modification at all. But what about mousemove?

Let’s review the default behavior to answer that question. For instance, let’s take this piece of code:
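A minimal version of such code:

var canvas = document.getElementById("drawSurface");
var context = canvas.getContext("2d");
context.fillStyle = "rgba(0, 0, 255, 0.5)";
canvas.addEventListener("mousemove", paint, false);

// Draw a 10px by 10px blue square centered on the cursor position
function paint(event) {
    var rect = canvas.getBoundingClientRect();
    context.fillRect(event.clientX - rect.left - 5, event.clientY - rect.top - 5, 10, 10);
}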

It simply draws a series of blue 10px by 10px squares inside an HTML5 canvas element by tracking the movements of the mouse. To test it, move your mouse inside the box. If you have a touch screen, try to interact with the canvas to check the current behavior for yourself:

Sample 0: default behavior if you do nothing

Result: only mousedown/up/click works with touch

You’ll see that when you move the mouse inside the canvas element, it draws a series of blue squares. But using touch, it only draws a single square at the exact position where you tap the canvas element. As soon as you try to move your finger over the canvas element, the browser tries to pan inside the page, as that’s the defined default behavior.

You then need to specify that you’d like to override the default behavior of the browser and tell it to redirect the touch events to your JavaScript code rather than trying to interpret them. For that, target the elements of your page that shouldn’t react to the default behavior anymore and apply this CSS rule to them:
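The rule in question is the touch-action property; a minimal version:

canvas {
    -ms-touch-action: none; /* prefixed version for IE10 */
    touch-action: none;     /* hand all pointer input to your JS code */
}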

The classic use case is when you have a map control in your page. You want to let the user pan & zoom inside the map area but keep the default behavior for the rest of the page. In this case, you will apply this CSS rule only on the HTML container exposing the map.

Now, when you move your finger inside the canvas element, it behaves like a mouse pointer. That’s cool! But you’ll quickly ask yourself: why does this code only track 1 finger? Well, this is because we’re just falling back on the last thing IE & MS Edge do to provide a very basic touch experience: mapping one of your fingers to simulate a mouse. And as far as I know, we only use 1 mouse at a time. So 1 mouse === 1 finger max using this approach. How, then, do we handle multi-touch events?

Step 2: use Pointer Events instead of mouse events

Take any of your existing code and replace your registrations to “mousedown/up/move” with “pointerdown/up/move” (or “MSPointerDown/Up/Move” in IE10), and your code will directly support a multi-touch experience in Pointer Events-enabled browsers!

You can now draw as many series of squares as your screen supports touch points! Even better, the same code works for touch, mouse & pen. This means, for instance, that you can use your mouse to draw some lines at the same time as you’re using your fingers to draw other lines.

If you’d like to change the behavior of your code based on the type of input, you can test the pointerType property value. For instance, let’s imagine that we want to draw 10px by 10px red squares for fingers, 5px by 5px green squares for pens, and 2px by 2px blue squares for mice. You need to replace the previous handler (the paint function) with this one:
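A sketch of what that new handler can look like (the square-drawing helper is illustrative):

function paint(event) {
    switch (event.pointerType) {
        case "touch": // finger: 10px by 10px red squares
            drawSquare(event, 10, "rgba(255, 0, 0, 0.5)");
            break;
        case "pen":   // stylus: 5px by 5px green squares
            drawSquare(event, 5, "rgba(0, 255, 0, 0.5)");
            break;
        case "mouse": // mouse: 2px by 2px blue squares
            drawSquare(event, 2, "rgba(0, 0, 255, 0.5)");
            break;
    }
}

function drawSquare(event, size, color) {
    var rect = canvas.getBoundingClientRect();
    context.fillStyle = color;
    context.fillRect(event.clientX - rect.left - size / 2,
                     event.clientY - rect.top - size / 2, size, size);
}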

If you’re lucky enough to have a device supporting all 3 types of input (like the Sony Duo 11, the Microsoft Surface Pro or the Samsung tablet some of you got during BUILD 2011), you’ll be able to see 3 kinds of drawing based on the input type. Great, isn’t it?

Still, there is a problem with this code. It now handles all types of input properly in browsers supporting Pointer Events, but it doesn’t work at all in other browsers like IE9, Chrome, Opera & Safari.

Step 3: do feature detection to provide a fallback code

As you’re probably already aware, the best approach to multi-browser support is feature detection. In our case, you need to test this:

window.PointerEvent

Beware that this only tells you whether the current browser supports Pointer Events. It doesn’t tell you whether touch is supported or not. To test for touch support, you need to check navigator.maxTouchPoints.

In conclusion, to have code supporting Pointer Events and falling back properly to mouse events in other browsers, you need code like this:
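Something along these lines (a sketch; the canvas/paint names reuse the earlier sample):

if (window.PointerEvent) {
    // Pointer Events supported: one registration covers mouse, pen & touch
    canvas.addEventListener("pointermove", paint, false);
} else {
    // Fallback for browsers without Pointer Events
    canvas.addEventListener("mousemove", paint, false);
}

// Separately, navigator.maxTouchPoints tells you if a touch screen is present
if (navigator.maxTouchPoints && navigator.maxTouchPoints > 1) {
    console.log("Multi-touch screen detected");
}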

While the Pointer Events specification is not yet a standard, you can already write code that supports it by leveraging David’s polyfill, and be ready for when Pointer Events becomes a standard implemented in all modern browsers. With David’s library, the events will be propagated to MSPointer on IE10, to Touch Events on WebKit-based browsers, and finally to mouse events as a last resort. In Internet Explorer 11, MS Edge & Firefox Nightly, it simply disables itself, as they implement the last version of the W3C spec. It’s damn cool! Check out his article to discover and understand how it works. Note that this polyfill is also very useful for supporting older browsers with elegant fallbacks to mouse events.

We’re also using Hand.js in our WebGL open-source 3D engine: http://www.babylonjs.com. Launch a scene in your WebGL-compatible browser and switch the camera to the “Touch camera” for mono-touch control, or the “Virtual joysticks camera” to use your two thumbs in the control panel:

And you’ll be able to move through the 3D world thanks to the power of your finger(s)!

Recognizing simple gestures

Now that we’ve seen how to handle multi-touch, let’s see how to recognize simple gestures like tapping or holding an element, and then some more advanced gestures like translating or scaling an element.

Thankfully, IE10/11 & MS Edge provide an MSGesture object that’s going to help us. Note that this object is currently specific to IE & MS Edge and not part of the W3C specification. Combined with the MSCSSMatrix element in IE11 (MS Edge rather uses WebKitCSSMatrix), you’ll see that you can build very interesting multi-touch experiences in a very simple way. CSSMatrix indeed represents a 4×4 homogeneous matrix that gives Document Object Model (DOM) scripting access to CSS 2-D and 3-D Transforms functionality. But before playing with that, let’s start with the basics.

The base concept is to first register to pointerdown. Then, inside the handler taking care of pointerdown, you need to choose which pointers you’d like to send to the MSGesture object to let it detect a specific gesture. It will then trigger one of these events: MSGestureTap, MSGestureHold, MSGestureStart, MSGestureChange, MSGestureEnd, MSInertiaStart. The MSGesture object takes all the pointers submitted as input parameters and applies a gesture recognizer on top of them to provide some formatted data as output. The only thing you need to do is to choose/filter which pointers you’d like to be part of the gesture (based on their ID, their coordinates on the screen, whatever…). The MSGesture object will do all the magic for you after that.

Sample 1: handling the hold gesture

We’re going to see how to hold an element (a simple DIV containing an image as a background). Once the element is held, we’ll add some corners to indicate to the user that this element is currently selected. The corners are generated by dynamically creating 4 DIVs added on top of each corner of the image. Finally, some CSS tricks use transforms and linear gradients in a smart way to obtain something like this:

For that, you need to follow these steps:

1 – register to the pointerdown event on the HTML element you’d like to monitor

2 – create an MSGesture object that will target this very same HTML element

3 – inside the pointerdown handler, add to the MSGesture object the various pointerIds you’d like to monitor (all of them or a subset of them, based on what you’d like to achieve)

4 – inside the MSGestureHold event handler, check in the details if the user has just started the hold gesture (MSGESTURE_FLAG_BEGIN flag). If so, add the corners; if not, remove them.
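Putting these steps together, a first version of the hold handler could look like this (a condensed sketch; the complete listing, including the improvement discussed below, follows):

function holded(event) {
    if (event.detail === event.MSGESTURE_FLAG_BEGIN) {
        // The gesture begins: adding the corners
        Corners.append(myGreatPic);
    } else {
        // MSGESTURE_FLAG_END or MSGESTURE_FLAG_CANCEL: removing them
        Corners.remove(myGreatPic);
    }
}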

Try to simply tap or mouse click the element: nothing happens. Touch & hold a single finger on the image, or do a long mouse click on it: the corners appear. Release your finger: the corners disappear.

Touch & hold 2 or more fingers on the image: nothing happens, as the Hold gesture is only triggered when a single finger is holding the element.

Note: the white border, the corners & the background image are set via CSS defined in toucharticle.css. Corners.js simply creates 4 DIVs (with its append function) and places them on top of the main element, in each corner, with the appropriate CSS classes.

Still, there is something I’m not happy with in the current result. Once you’re holding the picture, as soon as you slightly move your finger, the MSGESTURE_FLAG_CANCEL flag is raised and caught by the handler, which removes the corners. I’d rather remove the corners only once the user releases his finger anywhere above the picture, or as soon as he moves his finger out of the box delimited by the picture. To do that, we’re going to remove the corners only on pointerup or pointerout. This gives us this code instead:

var myGreatPic = document.getElementById("myGreatPicture");

// Creating a new MSGesture that will monitor the myGreatPic DOM element
var myGreatPicAssociatedGesture = new MSGesture();
myGreatPicAssociatedGesture.target = myGreatPic;

// You need to first register to pointerdown to be able to
// have access to more complex gesture events
myGreatPic.addEventListener("pointerdown", pointerdown, false);
myGreatPic.addEventListener("MSGestureHold", holded, false);
myGreatPic.addEventListener("pointerup", removecorners, false);
myGreatPic.addEventListener("pointerout", removecorners, false);

// Once touched, we're sending all pointers to the MSGesture object
function pointerdown(event) {
    myGreatPicAssociatedGesture.addPointer(event.pointerId);
}

// This event will be triggered by the MSGesture object
// based on the pointers provided during the pointerdown event
function holded(event) {
    // The gesture begins: we're adding the corners
    if (event.detail === event.MSGESTURE_FLAG_BEGIN) {
        Corners.append(myGreatPic);
    }
}

// We're removing the corners on pointerup or pointerout
function removecorners(event) {
    Corners.remove(myGreatPic);
}

// To avoid having the equivalent of the contextual
// "right click" menu being displayed on the pointerup event,
// we're preventing the default behavior
myGreatPic.addEventListener("contextmenu", function (e) {
    e.preventDefault(); // Disables system menu
}, false);

Sample 2: handling scale, translation & rotation

Finally, if you want to scale, translate or rotate an element, you only need to write a very few lines of code. You first need to register to the MSGestureChange event. This event sends you, via the various attributes described in the MSGestureEvent object documentation, the rotation, scale, translationX & translationY values currently applied to your HTML element.

Even better, by default the MSGesture object provides an inertia algorithm for free. This means that you can take the HTML element and throw it across the screen using your fingers, and the animation will be handled for you.

Lastly, to reflect the changes sent by the MSGesture object, you need to move the element accordingly. The easiest way to do that is to apply a CSS transform mapping the rotation, scale & translation details matching your finger gestures. For that, use the MSCSSMatrix element.

In conclusion, if you’d like to add all these cool gestures to the previous sample, register to the event like this:
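Here is a sketch of such a handler, modeled on the IE11 MSCSSMatrix approach described above (on MS Edge, you’d build a WebKitCSSMatrix from the computed style instead):

myGreatPic.addEventListener("MSGestureChange", manipulateElement, false);

function manipulateElement(e) {
    // Rebuild a matrix from the element's current transform (IE11)
    var m = new MSCSSMatrix(e.target.currentStyle.msTransform);
    // Apply the translation, then rotation & scale around the
    // gesture origin (offsetX/offsetY), as computed by MSGesture;
    // note that e.rotation is in radians while rotate() takes degrees
    e.target.style.msTransform = m
        .translate(e.offsetX, e.offsetY)
        .translate(e.translationX, e.translationY)
        .rotate(e.rotation * 180 / Math.PI)
        .scale(e.scale)
        .translate(-e.offsetX, -e.offsetY);
}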

Try to move and throw the image inside the black area with 1 or more fingers. Try also to scale or rotate the element with 2 or more fingers. The result is awesome, and the code is very simple as all the complexity is handled natively by IE / MS Edge.

Video & direct link to all samples

If you don’t have a touch screen available for IE / MS Edge and you’re wondering how these samples work, have a look at this video where I’m describing all the samples shared in this article on the Samsung BUILD 2011 tablet:

Logically, with all the details shared in this article and the associated links to other resources, you’re now ready to implement the Pointer Events model in your websites & Windows Store applications. You’ll then have the opportunity to easily enhance the experience of your users in Internet Explorer 10/11, in Microsoft Edge and soon for every Firefox user!

Experimenting ECMAScript 6 in an easy way on Babylon.js with TypeScript 1.5

I’m definitely more than happy that we decided, more than one year ago, to switch over to TypeScript. David has already explained the reasons behind that move: Why we decided to move from plain JavaScript to TypeScript for Babylon.js

Thanks to TypeScript, we’ve been able to improve the quality of our code, improve our productivity and create the fabulous Playground we’re so proud of: http://www.babylonjs-playground.com/, which provides auto-completion in the browser! We’ve also been able to welcome, without any pain, some new team members coming from a C# background with few JS skills. But thanks to the TypeScript compiler, we can also test the future without rewriting a single line of code!

David and I are still coding Babylon.js using Visual Studio and TFS, while regularly pushing our code to the GitHub repo. By upgrading our project to Visual Studio 2015 RTM, we’ve been able to upgrade it to TypeScript 1.5.

Once done, let me show you how quickly I upgraded Babylon.js from ES5 to ES6. Right-click on your project, navigate to “TypeScript Build” and switch the “ECMAScript version” from ES5 to ES6:
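With ES6 as the target, the compiler stops down-leveling classes: instead of the ES5 IIFE/prototype pattern, the emitted JavaScript keeps native class, extends & super statements. Here is an illustrative shape of such output (not the actual Babylon.js code):

class Camera {
    constructor(name) {
        this.name = name;
    }
}

class GamepadCamera extends Camera {
    constructor(name) {
        // This native "super" call is exactly the kind of line
        // we'll set a breakpoint on in F12 below
        super(name);
    }
}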

Microsoft Edge and ES6

You’ll then note that you need a very recent browser such as Microsoft Edge to be able to execute this demo & code. MS Edge and Firefox 41 are currently the most advanced browsers in terms of ES6 support. You can check your current browser’s ES6 support by navigating to this URL: http://kangax.github.io/compat-table/es6/

Launching it in MS Edge on Windows 10 (build 10240), you’ll get these results:

67% of ES6 features supported in MS Edge and 68% in Firefox 41 out of the box. Those results are impressive!

Now enable the experimental JavaScript features flag in MS Edge via “about:flags”. If you then run the ES6 compatibility tool again (http://kangax.github.io/compat-table/es6/), you’ll see that MS Edge jumps to 81% of ES6 features supported. It now supports class, super & generators:

To make this demo work in Firefox or Chrome, you’ll probably need a nightly build.

Time to play with it in F12

Now that MS Edge is properly configured, navigate to: http://www.babylonjs.com/indexES6.html and open F12 in a separate window. In the “Debugger” tab, open “babylon.gamepadCamera.js” and set a breakpoint on the “super” line of code:

Launch the “Mansion” demo and switch the camera to “Gamepad Camera”:

You’ll properly break into the code as expected:

Press F11 to jump into the extended class (BABYLON.FreeCamera):

You’re currently debugging ES6 code! Isn’t that cool?

In the “Debugger” tab, open “babylon.virtualJoystick.js” and set a breakpoint on line 78 inside the arrow function:

Switch the camera to “Virtual joysticks camera”, touch the screen or left click it and you’ll be able to debug the arrow function:

WebGL testing

Once installed, you can launch the tool:

And choose the image you’re interested in:

Let’s choose “5’’ Lollipop (5.0) XXHDPI Phone – Similar to Samsung Galaxy S4” and press play. If it’s the first time you’re launching the emulator, it will configure the Hyper-V network settings for you.

Once started, launch the default installed browser and try, for instance, to navigate to http://www.babylonjs.com, the best available WebGL framework to date. If you try to launch one of our scenes, you’ll see this error raised:

Indeed, the default browser shipped with this Lollipop image doesn’t support WebGL. We need to install Chrome on it.
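If you want to check that by yourself, here’s a quick generic test you can run from the browser’s console (a sketch, not what babylonjs.com actually executes):

// Quick & dirty WebGL support check
var testCanvas = document.createElement("canvas");
var gl = testCanvas.getContext("webgl") || testCanvas.getContext("experimental-webgl");
console.log(gl ? "WebGL is supported" : "WebGL is NOT supported");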

Search for an x86 version of the Chrome APK, such as this one: Chrome 43.0.2357.93 (x86), and drag’n’drop the APK directly into the emulator. It should install it:

But if you navigate again to the same URL with Chrome, you’ll still get the same error. This time, it’s because Chrome hasn’t enabled WebGL, as the emulator is not part of its whitelist. To force it, navigate to “about:flags” and enable this option: “Override software rendering list”.

Let’s first review how to debug your layout on the Android & Windows Phone emulators. For instance, I’m currently playing with Flexbox during my spare time to improve the Babylon.js website. Thanks to the Modernizr plug-in, you can see that Flexbox is supported by the emulator, and you can even review the layout size via the DOM Explorer:

For instance, you can see in the Android emulator (on the left) the “Mansion” flexbox item highlighted; its size is currently 172px x 112px.

Let’s review the same site on the Windows Phone emulator (on the right):

Of course, Flexbox is also supported by IE11 Mobile, and this time the same flexbox item is 140px x 91px.

Another interesting feature is the interactive console. Sometimes, using WebGL, it’s hard to know why your code failed on a mobile device. This is often because the mobile GPU doesn’t support a specific feature or because a shader doesn’t compile. This is, for instance, the case with our “Depth of field / end” demo. The shader is too complex for Windows Phone, and you can easily verify it with our tool:
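To give you an idea, here’s the kind of generic snippet you could run from the interactive console to see why a shader fails (a sketch assuming you already have a WebGL context gl and the fragment shader source string; this is not the tool’s actual code):

// Compile a fragment shader and dump the driver's error log
function checkFragmentShader(gl, fragmentSource) {
    var shader = gl.createShader(gl.FRAGMENT_SHADER);
    gl.shaderSource(shader, fragmentSource);
    gl.compileShader(shader);
    if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
        // This log often explains mobile GPU failures
        console.error(gl.getShaderInfoLog(shader));
    }
    return shader;
}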

We’ve got plenty of other plugins that could help you, and we’re currently working on adding new ones to go even further. And who knows, maybe we’ll have one for Babylon.js in the near future.