On November 12, a new update was made available for Xbox One. As a gamer, I really love the new dashboard and the Xbox 360 backward compatibility. But as a web developer, I’m more than happy to now have Microsoft Edge running on my console! This means that you can now run very modern content inside the Xbox One browser!

Testing Your HTML5 Content on Xbox One and Microsoft Edge

It supports, for instance, WebGL, Web Audio and the Gamepad API, as you can see in this video:

We’re using the Babylon.js Mansion demo on the Xbox One. It runs perfectly fine! I’m still amazed by the beauty of web standards: if you’re following best practices, your code will run everywhere! Even better, I’m using the Xbox Windows Store app in this video to stream the video output of my Xbox One to my Windows 10 PC. This means that I can test my web content on my Xbox One without leaving my chair ;)

Thanks to the Xbox Windows Store app, you can remotely test the MS Edge Xbox One browser from your Windows 10 PC!

By simply adding a single script reference at the beginning of your HTML page, you can remotely debug your site from any browser using the Vorlon.js dashboard. In this video, I’m checking support for the Gamepad API with the Modernizr plug-in, using the interactive console to check for potential errors and to execute JavaScript on the Xbox One, and finally using the DOM Explorer to update the HTML and some Flexbox properties. Again, the Xbox Windows Store app is very useful for live debugging the page. Simply snap it on the left and snap the Vorlon.js dashboard on the right. Even better, use multiple screens on your Windows 10 PC:
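That single script reference typically looks like the following. This is a sketch: the host and port depend on where your Vorlon.js server is running (localhost:1337 is the default when you launch the dashboard locally).

```html
<!-- Loads the Vorlon.js client and connects this page to the dashboard.
     Replace localhost:1337 with the address of your own Vorlon.js server. -->
<script src="http://localhost:1337/vorlon.js"></script>
```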

I’m definitely more than convinced that web standards and WebGL offer great new possibilities for the gaming industry! I’ve talked about this in a previous article: The web: the next game frontier? It really seems to be coming true. ;-)

Creating Fun and Immersive Audio Experiences with Web Audio
https://www.sitepoint.com/creating-fun-immersive-audio-experiences-web-audio/
Thu, 19 Nov 2015

Today, thanks to the power of the Web Audio API, you can create immersive audio experiences directly in the browser – there’s no need for any plug-ins. I’d like to share with you what I’ve learned while building the audio engine of our Babylon.js open-source gaming engine.

Web Audio in a nutshell

If you’ve ever tried to do anything more than streaming some sounds or music using the HTML5 audio element, you know how limited it is. Web Audio breaks down all those barriers and gives you access to the complete audio stream and pipeline, as in any modern native application. It works thanks to an audio routing graph made of audio nodes. This gives you precise control over timing, filters, gain, analysers, convolvers and 3D spatialization.

It’s now widely supported (Microsoft Edge, Chrome, Opera, Firefox, Safari, iOS, Android, Firefox OS and Windows 10 Mobile). In Edge (and I’m guessing in other browsers too), it’s rendered on a separate thread from the main JS thread. This means that, most of the time, it will have little to no performance impact on your app or game. All browsers support at least the WAV and MP3 codecs, and some also support OGG. Edge even supports the multi-channel Dolby Digital Plus™ audio format! To build a web app that runs everywhere, you need to pay attention to codec support and potentially provide multiple sources for all browsers.
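One way to provide multiple sources is to pick the first format the current browser reports it can play. Here's a minimal sketch: the `pickAudioSource` helper and the file names are hypothetical; in a browser you would pass `HTMLMediaElement.canPlayType` as the probe function.

```javascript
// Pick the first source URL the current browser can play.
// "canPlay" abstracts HTMLMediaElement.canPlayType, which returns
// "probably", "maybe" or "" (empty string) for a given MIME type.
function pickAudioSource(canPlay, sources) {
  for (const src of sources) {
    if (canPlay(src.type) !== "") {
      return src.url;
    }
  }
  return null; // no supported format found
}

// In a browser you would wire in the real implementation:
//   const audio = document.createElement("audio");
//   const url = pickAudioSource((t) => audio.canPlayType(t), [
//     { type: "audio/ogg", url: "sounds/violin.ogg" },
//     { type: "audio/mpeg", url: "sounds/violin.mp3" },
//   ]);
```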

Audio routing graph explained

Note: this picture has been built using the graph displayed in the awesome Web Audio tab of the Firefox DevTools. I love this tool. :) So much so that I’ve planned to mimic it via a Vorlon.js plug-in. I’ve started working on it, but it’s still a very early draft.

Let’s have a look at this diagram. Every node can take something as an input and be connected to the input of another node. In this case, we can see that an MP3 file acts as the source of the AudioBufferSource node, connected to a Panner node (which provides spatialization effects), connected to Gain nodes (volume), connected to an Analyser node (to have access to the frequencies of the sounds in real time) and finally connected to the AudioDestination node (your speakers). You can also see that you can control the volume (gain) of a specific sound, or of several of them at the same time. For instance, in this case, I’ve got a “global volume” handled via the final gain node just before the destination, and a kind of per-track volume via the gain nodes placed just before the analyser node.
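As a sketch, the routing graph described above can be wired up with just a few calls. This is browser-only code; the asset path, the panner position and the exact gain staging are illustrative assumptions, not the actual Babylon.js implementation.

```javascript
// Build: source -> panner -> track gain -> analyser -> master gain -> speakers
const audioCtx = new AudioContext();

async function playSpatializedSound(url) {
  // Decode the MP3/WAV file into an in-memory buffer.
  const response = await fetch(url);
  const audioBuffer = await audioCtx.decodeAudioData(await response.arrayBuffer());

  const source = audioCtx.createBufferSource();
  source.buffer = audioBuffer;

  const panner = audioCtx.createPanner();      // 3D spatialization
  panner.setPosition(10, 0, 5);                // hypothetical position in the scene

  const trackGain = audioCtx.createGain();     // per-track volume
  const analyser = audioCtx.createAnalyser();  // real-time frequency access
  const masterGain = audioCtx.createGain();    // the "global volume"

  source.connect(panner);
  panner.connect(trackGain);
  trackGain.connect(analyser);
  analyser.connect(masterGain);
  masterGain.connect(audioCtx.destination);

  source.start(0);
}

playSpatializedSound("sounds/violin.mp3"); // hypothetical asset path
```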

How Pointer Events Will Make Cross-Browser Touch Support Easy
https://www.sitepoint.com/pointer-events-will-make-cross-browsers-touch-support-easy/
Wed, 23 Sep 2015

This article is part of a web development series from Microsoft. Thank you for supporting the partners who make SitePoint possible.

I often get questions from developers like, “With so many touch-enabled phones and tablets, where do I start?” and “What is the easiest way to build for touch input?” Short answer: “It’s complex.” Surely there’s a more unified way to handle multi-touch input on the web – in modern, touch-enabled browsers or as a fallback for older browsers. In this article I’d like to show you some browser experiments using Pointers – an emerging multi-touch technology – and polyfills that make cross-browser support, well, less complex. It’s the kind of code you can also experiment with and easily use on your own site.

The reason why I experiment with Pointer Events is not based on device share – it’s because Microsoft’s approach to basic input handling is quite different from what’s currently available on the web, and it deserves a look for what it could become. The difference is that developers can write to a more abstract form of input, called a “Pointer”. A Pointer can be any point of contact on the screen made by a mouse cursor, pen, finger, or multiple fingers. This way, you don’t waste time coding for every type of input separately.

The Concepts

We will begin by reviewing apps running inside Internet Explorer 11, Microsoft Edge, and Firefox Nightly, which expose the Pointer Events API, and then look at solutions to support all browsers. After that, we will see how you can take advantage of the IE/MS Edge gesture services that help you handle touch in your JavaScript code in an easy way. As Windows 8.1/10 and Windows Phone 8.1/Mobile 10 share the same browser engine, the code and concepts are identical for both platforms. Moreover, everything you’ll learn about touch in this article will help you do the very same tasks in Windows Store apps built with HTML5/JS, as this is, again, the same engine being used.

The idea behind the Pointer is to let you address mouse, pen and touch devices via a single code base, using a pattern that matches the classic mouse events you already know. Indeed, mouse, pen and touch have some properties in common: you can move a pointer with them, and you can click on an element with them, for instance. Let’s then address these scenarios via the very same piece of code! Pointers aggregate those common properties and expose them in a similar way to mouse events.
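As a minimal sketch, handling all input types through a single code path looks like this. It's browser-only code, and the element ID and `drawAt` helper are hypothetical:

```javascript
// Hypothetical canvas element and drawAt() helper, for illustration only.
const canvas = document.getElementById("paintCanvas");

// One listener covers mouse, pen and touch alike.
canvas.addEventListener("pointerdown", (event) => {
  // pointerType tells you which device fired it, if you ever need to know:
  // "mouse", "pen" or "touch".
  console.log("pointerdown from a " + event.pointerType);
});

canvas.addEventListener("pointermove", (event) => {
  // Same coordinate properties you already know from mouse events.
  drawAt(event.clientX, event.clientY);
});
```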

Debug WebGL and HTML5 Mobile Experiences with Visual Studio Emulators
https://www.sitepoint.com/debug-webgl-html5-mobile-experiences-visual-studio-emulators/
Tue, 22 Sep 2015

This article is part of a web development series from Microsoft. Thank you for supporting the partners who make SitePoint possible.

With the recent availability of Visual Studio 2015 RTM came the free Visual Studio Emulator for Android. In this article, I’ll show you how to test your WebGL experiences on these very fast Android emulators.

WebGL testing

Once installed, you can launch the tool:

And choose the image you’re interested in:

Let’s choose the “5-inch Lollipop (5.0) XXHDPI Phone – Similar to Samsung Galaxy S4” image and press play. If it’s the first time you’re launching the emulator, it will configure the Hyper-V network settings for you.

Once started, launch the default installed browser and try, for instance, to navigate to http://www.babylonjs.com/, the best available WebGL framework to date. If you try to launch one of our scenes, you’ll see an error:

Indeed, the default browser shipped with this Lollipop image doesn’t support WebGL. We need to install Chrome on it.

Search for an x86 version of the Chrome APK such as this one: Chrome 43.0.2357.93 (x86) and drag’n’drop the APK directly into the emulator. It should install it:
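If drag and drop doesn't work for you, the same install can be done over adb, since the emulator shows up as a regular device. The APK file name below is just an example:

```shell
# List running devices/emulators to confirm the emulator is visible,
# then side-load the x86 Chrome APK onto it.
adb devices
adb install chrome-43.0.2357.93-x86.apk
```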

Experiment with ECMAScript 6 on Babylon.js with TypeScript 1.5
https://www.sitepoint.com/experiment-ecmascript-6-babylon-js-typescript-1-5/
Thu, 17 Sep 2015

This article is part of a web development series from Microsoft. Thank you for supporting the partners who make SitePoint possible. Since releasing babylon.js, the WebGL open-source gaming framework, a couple of years ago, we (with help from the community) have been constantly exploring ways to make it even better. I’m definitely more than happy that […]

Creating an Accessible Breakout Game Using Web Audio and SVG
https://www.sitepoint.com/creating-accessible-breakout-game-using-web-audio-svg/
Thu, 10 Sep 2015

As the co-author of Babylon.js, a WebGL gaming engine, I always felt a little uneasy listening to folks discuss accessibility best practices at web conferences. The content created with Babylon.js is indeed completely inaccessible to blind people. Making the web accessible to everyone is very important. I’m more convinced than ever about that, as I’m personally touched by this through my own son. And so I wanted to contribute to the accessibility of the web in some way.

That’s why I decided to work on creating a game that uses WebGL and is fully accessible, to prove that visual games aren’t inherently inaccessible. I chose to keep it simple, so I created a breakout clone, which you can see in action in the following YouTube video:

Now, let me share with you the background story of this game and all the experiments involved…

Once Upon a Time

It all started during the Kiwi Party 2014 conference, while listening to Laura Kalbag’s talk about guidelines for top accessible design considerations. I was discussing my lack of knowledge about how to make WebGL accessible with Stéphane Deschamps, a lovely, funny and talented guy, and how I could avoid people creating lots of inaccessible content. To motivate me, he challenged me, probably without estimating the consequences: "It would be very cool if you managed to create an accessible breakout game!" Boom. The seed of what you see here was planted in my brain right there and then. I started thinking about it in earnest and researched how I could create such an experience.

First, I discovered that there were already accessible audio games available at audiogames.net and game-accessibility.com. I also researched best practices for creating games for blind people. While interesting to read, it wasn’t what I was looking for. I didn’t want to create a dedicated experience for blind people; I wanted to create a universal game, playable by anybody, regardless of ability. I’m convinced that the web was created for this reason, and my dream was to embrace this philosophy in my game. I wanted to create a unique experience that could be played by all kinds of users so they could share in the joy together. I wanted great visuals & sounds, not a "look, it’s accessible, that’s why it can’t be as good" solution.

Why and How We Migrated babylon.js to Azure: CORS, gzip, and IndexedDB
https://www.sitepoint.com/migrated-babylon-js-azure-cors-gzip-indexeddb/
Tue, 05 May 2015

You are working for a startup. Suddenly, that hard year of coding is paying off – but with success comes more growth and more demand for your web app to scale.

In this tutorial, I want to humbly use one of our more recent success stories around our WebGL open-source gaming framework, babylon.js, and its website: babylonjs.com. We have been excited to see so many web gaming devs try it out.

But to keep up with the demand, we knew we needed a new web hosting solution. While this tutorial focuses on Microsoft Azure, many of the concepts apply to various solutions you might prefer. We are also going to look at the various optimizations we have put in place to limit as much as possible the outgoing bandwidth from our servers to your browser.

Introduction

Babylon.js is a personal project we have been working on for over a year now. As it is a personal project (i.e. our time and money), we have hosted the website, textures and 3D scenes on a relatively cheap hosting solution using a small, dedicated Windows/IIS machine. The project started in France, but it quickly appeared on the radar of several 3D and web specialists around the globe, as well as some gaming studios. We were happy about the community's feedback, and the traffic was still manageable!

For instance, between February 2014 and April 2014, we had an average of 7K+ users/month and an average of 16K+ page views/month. Some of the events we have been speaking at generated some interesting peaks:

But the experience on the website was still good enough. Loading our scenes wasn't done at stellar speed, but users were not complaining that much.

Game over for our little server! It slowly stopped working, and the experience for our users was really bad. The IIS server was spending its time serving large static assets and images, and the CPU usage was too high. As we were about to launch the Assassin's Creed Pirates WebGL experience running on babylon.js, it was time to switch to more scalable, professional hosting using a cloud solution.

But before reviewing our hosting choices, let's briefly talk about the specifics of our engine and website:

- Everything is static on our website. We currently don't have any server-side code running.

- Our scenes (.babylon JSON files) and textures (.png or .jpeg files) can be very big (up to 100 MB). This means that we absolutely needed to activate gzip compression on our ".babylon" scene files. Indeed, in our case, the pricing is largely indexed on outgoing bandwidth.

- Drawing into the WebGL canvas needs special security checks. For instance, you can't load our scenes and textures from another server without CORS enabled.

Credits: I'd like to specially thank Benjamin Talmard, one of our French Azure technical evangelists, who helped us move to Azure.

Step 1: Moving to Azure Web Sites & the Autoscale service

As we'd like to spend most of our time writing code and features for our engine, we don't want to lose time on the plumbing. That's why we immediately decided to choose a PaaS approach rather than an IaaS one.

Moreover, we liked the Visual Studio integration with Azure. I can do almost everything from my favorite IDE. And even if babylon.js is hosted on GitHub, we're using Visual Studio 2013, TypeScript and Visual Studio Online to code our engine. As a note for your own project, you can get Visual Studio Community and an Azure trial for free.

Moving to Azure took me approximately 5 min:

1. I created a new Web Site in the admin page: http://manage.windowsazure.com (this could be done inside VS too).

2. I took the right changeset from our source code repository, matching the version that was currently online.

3. I right-clicked the Web project in the Visual Studio Solution Explorer.

Here comes the awesomeness of the tooling. As I was logged into VS using the Microsoft account bound to my Azure subscription, the wizard let me simply choose the web site to which I'd like to deploy.

No need to worry about complex authentication, connection strings or whatever.

"Next, Next, Next & Publish" and a couple of minutes later, at the end of the uploading process of all our assets and files, the web site was up and running!

On the configuration side, we wanted to benefit from the cool autoscale service. It would have helped a lot in our previous Hacker News scenario.

First, your instance has to be configured in "Standard" mode in the "Scale" tab.

Then, you can choose up to how many instances you'd like to automatically scale to, under which CPU conditions, and also at which scheduled times. In our case, we've decided to use up to 3 small instances (1 core, 1.75 GB memory), to auto-spawn a new instance if the CPU goes over 80% utilization, and to remove an instance if the CPU drops under 60%. The autoscaling mechanism is always on in our case; we haven't set any specific scheduled times.

The idea is really to only pay for what you need during specific timeframes and loads. I love the concept. With this, we would have been able to handle previous peaks without doing anything, thanks to this Azure service! This is what I call a service.

You've also got a quick view of the autoscaling history via the purple chart. In our case, since we moved to Azure, we've never gone over 1 instance. And we're going to see below how to minimize the risk of triggering autoscaling.

To conclude on the web site configuration, we wanted to enable automatic gzip compression on our specific 3D engine resources (.babylon and .babylonmeshdata files). This was critical to us as it could save up to 3x the bandwidth and thus… the price.

Web Sites are running on IIS. To configure IIS, you need to go into the web.config file. We're using the following configuration in our case:

[code]

[/code]
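The exact configuration isn't reproduced above, but a web.config enabling compression for the custom extensions looks roughly like this. The MIME type names are assumptions; the key points are registering a mimeMap so IIS will serve the unknown extensions at all, and opting that type into static compression:

```xml
<configuration>
  <system.webServer>
    <staticContent>
      <!-- IIS refuses to serve unknown extensions without a MIME mapping -->
      <mimeMap fileExtension=".babylon" mimeType="application/babylon" />
      <mimeMap fileExtension=".babylonmeshdata" mimeType="application/babylonmeshdata" />
    </staticContent>
    <httpCompression>
      <staticTypes>
        <!-- Opt our custom scene types into gzip static compression -->
        <add mimeType="application/babylon" enabled="true" />
        <add mimeType="application/babylonmeshdata" enabled="true" />
      </staticTypes>
    </httpCompression>
  </system.webServer>
</configuration>
```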

This solution is working pretty well and we even noticed that the time to load our scenes has been reduced compared to our previous host. I'm guessing this is thanks to the better infrastructure and network used by Azure datacenters.

However, I had been thinking about moving to Azure for a while. And my first idea wasn't to let web site instances serve my large assets. Since the beginning, I was more interested in storing my assets in blob storage, which is better designed for that. It would also offer us a possible CDN scenario.

The primary reason for using blob storage in our case is to avoid loading the CPU of our web site instances just to serve assets. If everything is served via blob storage except a few HTML, JS & CSS files, our web site instances will have little chance of needing to autoscale.

But this raises two problems to solve:

As the content will be hosted on another domain name, we will run into the cross-domain security problem. To avoid that, you need to enable CORS on the remote domain (Azure Blob Storage).

Azure Blob Storage doesn't support automatic gzip compression. And we don't want to lower the web site CPU usage if, in exchange, we're paying 3x the price because of the increased bandwidth!

I then just enabled the support for GET and proper headers on my container. To check if everything works as expected, simply open your F12 developer bar and check the console logs:

As you can see, the green log lines imply that everything works well.

Here is a sample case where it will fail. If you try to load our scenes from our blob storage directly from your localhost machine (or any other domain), you'll get these errors in the logs:

In conclusion, if you see that your calling domain is not found in the "Access-Control-Allow-Origin" header, with an "Access is denied" message just after that, it's because you haven't set your CORS rules properly. It is very important to control your CORS rules; otherwise, anyone could use your assets, and thus your bandwidth, costing you money without you even knowing!
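A quick way to verify your CORS rules outside the browser is from the command line. The storage account, container and file names here are placeholders:

```shell
# Send a request with an Origin header and inspect the CORS response headers.
curl -s -D - -o /dev/null \
  -H "Origin: https://www.babylonjs.com" \
  "https://yourstorage.blob.core.windows.net/scenes/espilit.babylon" \
  | grep -i "access-control"
```

If the allowed origin you configured doesn't come back in `Access-Control-Allow-Origin`, the browser will block the request.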

Enhance Your JavaScript Debugging with Cross-Browser Source Maps
https://www.sitepoint.com/enhance-your-javascript-debugging-with-cross-browser-source-maps/
Fri, 01 May 2015

As a JavaScript developer, I’m sure you’ve already found yourself in this scenario: something goes wrong with the production version of your code, and debugging it directly from the production server is a nightmare, simply because the code has been minified or compiled from another language such as TypeScript or CoffeeScript.

The good news? The latest versions of browsers can help you solve this problem by using source maps. In this tutorial, I’ll show you how to find source maps in all of the browsers and get the most out of those few minutes you have to debug.

Wait, what are Source Maps?

According to the great Introduction to JavaScript Source Maps article, a source map is “a way to map a combined/minified file back to an unbuilt state. When you build for production, along with minifying and combining your JavaScript files, you generate a source map which holds information about your original files”.

Please don’t hesitate to read Ryan Seddon’s article first, as it goes into great detail on how source maps work. You’ll learn that source maps use an intermediate file that does the matching between the production version of your code and its original development state. The format of this file is described here: Source Map Revision 3 Proposal.

Now to illustrate, I’m going to share the way we’re currently working while developing our WebGL Babylon.js open-source framework: http://www.babylonjs.com. It’s written in TypeScript. But the principles will remain the same if you’re using plain JavaScript compressed/minified or other languages such as CoffeeScript.

Plug an Xbox 360 or Xbox One controller into a USB port on your machine. Press the A button to activate the gamepad and play with it:

But don’t worry, you won’t need a gamepad controller to follow this tutorial.

Note: the TypeScript compiler automatically generates the source map for you. If you’d like to generate a source map while producing the minified version of your code, I would recommend using UglifyJS 2: https://github.com/mishoo/UglifyJS2

For this article, I even mixed both. I’ve minified the JS generated by TypeScript and kept the source mapping intact using this command line:
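The command line itself isn't shown above; with UglifyJS 2, chaining the TypeScript source map into the minified output looks roughly like this (file names are illustrative, not the actual project files):

```shell
# 1. Compile TypeScript and emit a source map (produces app.js + app.js.map)
tsc --sourceMap app.ts

# 2. Minify, feeding in the existing map so the final map points back to app.ts
uglifyjs app.js -o app.min.js \
  --source-map app.min.js.map \
  --in-source-map app.js.map \
  --source-map-url app.min.js.map
```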

How to debug with the original source code

Using Internet Explorer 11

Once the gamepad test page has loaded, press F12 in IE11.

You’ll see that the HTML source references 2 JavaScript files: babylon.gamepads.js at the beginning of the page and testgamepad.min.js at the very end. The first file comes from our framework on GitHub, and the second one is a simple sample showing how to consume it.

Understanding Collisions and Physics with Babylon.js and Oimo.js
https://www.sitepoint.com/understanding-collisions-physics-babylon-js-oimo-js/
Wed, 18 Mar 2015

This article is part of a web dev tech series from Microsoft. Thank you for supporting the partners who make SitePoint possible.

Today, I’d like to share with you the basics of collisions, physics and bounding boxes by playing with the WebGL babylon.js engine and a physics engine companion named oimo.js.

You can launch it in a WebGL-compatible browser – like IE11, Firefox, Chrome, Opera, Safari 8, or Project Spartan in Windows 10 Technical Preview – then move inside the scene as in an FPS game. Press the “s” key to launch some spheres/balls and the “b” key to launch some boxes. Using your mouse, you can also click on one of the spheres or boxes to apply an impulse force to it.

Understanding collisions

“Collision detection typically refers to the computational problem of detecting the intersection of two or more objects. While the topic is most often associated with its use in video games and other physical simulations, it also has applications in robotics. In addition to determining whether two objects have collided, collision detection systems may also calculate time of impact (TOI), and report a contact manifold (the set of intersecting points). [1] Collision response deals with simulating what happens when a collision is detected (see physics engine, ragdoll physics). Solving collision detection problems requires extensive use of concepts from linear algebra and computational geometry.”

Let’s now unpack that definition into a cool 3D scene that will act as our starting base for this tutorial.

You can move around this great museum as you would in the real world. You won’t fall through the floor, walk through walls, or fly. We’re simulating gravity. All of that seems pretty obvious, but it requires a bunch of computation to simulate in a 3D virtual world. The first question we need to resolve when we think about collision detection is how complex it should be. Indeed, testing whether 2 complex meshes are colliding could cost a lot of CPU, even more so with a JavaScript engine, where it’s complex to offload that onto something other than the UI thread.

To better understand how we’re managing this complexity, navigate into the Espilit museum near this desk:

You’re blocked by the table, even if there seems to be some space available on the right. Is it a bug in our collision algorithm? No, it’s not (babylon.js is free of bugs! ;-)). It’s because Michel Rousseau, the 3D artist who built this scene, did this by choice. To simplify the collision detection, he used a specific collider.

What’s a collider?

Rather than testing collisions against the complete detailed meshes, you can put them inside simple invisible geometries. These colliders will act as the mesh representation and will be used by the collision engine instead. Most of the time, you won’t notice the difference, but it allows us to use much less CPU, as the math behind it is much simpler to compute.

Every engine supports at least 2 types of colliders: the bounding box and the bounding sphere. You’ll understand better by looking at this picture:

This beautiful yellow deck is the mesh to be displayed. Rather than testing the collisions against each of its faces, we can try to insert it into the best bounding geometry. In this case, a box seems a better choice than a sphere to act as the mesh impostor. But the choice really depends on the mesh itself.
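To make the CPU-cost argument concrete, here is what the two basic collider tests boil down to, in plain JavaScript. This is a sketch of the math, not Babylon.js's actual code: a handful of comparisons instead of per-face mesh intersection tests.

```javascript
// Bounding sphere vs bounding sphere: they collide when the distance
// between centers is smaller than the sum of the radii. We compare
// squared distances to avoid a square root.
function spheresIntersect(a, b) {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  const r = a.radius + b.radius;
  return dx * dx + dy * dy + dz * dz <= r * r;
}

// Bounding sphere vs axis-aligned bounding box: clamp the sphere's
// center onto the box to find the closest point, then compare that
// distance with the sphere's radius.
function sphereIntersectsBox(s, box) {
  const cx = Math.max(box.min.x, Math.min(s.x, box.max.x));
  const cy = Math.max(box.min.y, Math.min(s.y, box.max.y));
  const cz = Math.max(box.min.z, Math.min(s.z, box.max.z));
  const dx = s.x - cx, dy = s.y - cy, dz = s.z - cz;
  return dx * dx + dy * dy + dz * dz <= s.radius * s.radius;
}
```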

Let’s go back to the Espilit scene and display the invisible bounding element in a semitransparent red color:

You can now understand why you can’t move past the right side of the desk: it’s because you’re colliding (well, the babylon.js camera is colliding) with this box. If you’d like to allow that, simply change the collider’s size by lowering its width to perfectly fit the width of the desk.

A capsule is useful for humans or humanoids, as it fits our bodies better than a box or a sphere. A mesh collider is almost never the complete mesh itself; rather, it’s a simplified version of the original mesh you’re targeting. But it is still much more precise than a box, a sphere or a capsule.

Of course, you can still follow this tutorial if you don’t want to use Visual Studio. Here's the code to load our scene. Remember that, while most browsers support WebGL now, you should still test in Internet Explorer, even on your Mac.
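The loading code isn't reproduced here, but with Babylon.js it is essentially one call to the SceneLoader. This is a browser-only sketch; the canvas element ID, folder and scene file name are assumptions:

```javascript
// Hypothetical canvas element hosting the WebGL context.
const canvas = document.getElementById("renderCanvas");
const engine = new BABYLON.Engine(canvas, true);

// Load a .babylon scene exported from a 3D tool, then start rendering.
BABYLON.SceneLoader.Load("scenes/", "espilit.babylon", engine, (scene) => {
  scene.executeWhenReady(() => {
    engine.runRenderLoop(() => scene.render());
  });
});
```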

Using this material, you will only benefit from the embedded collision engine of Babylon.js. Indeed, we make a distinction between our collision engine and a physics engine. The collision engine is mostly dedicated to the camera interacting with the scene. You can enable gravity (or not) on the camera, and you can enable the checkCollisions option on the camera and on the various meshes. The collision engine can also tell you whether two meshes are colliding. But that’s all (and that’s already a lot, in fact!). The collision engine won’t generate actions, forces or impulses after two Babylon.js objects collide. You need a physics engine for that, to bring life to the objects.
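Enabling that embedded collision engine is mostly a matter of flipping a few flags. A sketch, assuming `scene`, `camera`, `ground` and `wall` come from your loaded scene:

```javascript
// Turn on the collision engine for the scene and simulate gravity.
scene.collisionsEnabled = true;
scene.gravity = new BABYLON.Vector3(0, -9.81, 0);

// The camera is what collides with the world here.
camera.checkCollisions = true;
camera.applyGravity = true;
camera.ellipsoid = new BABYLON.Vector3(0.5, 1, 0.5); // rough capsule around the player

// Opt individual meshes into collision checking.
ground.checkCollisions = true;
wall.checkCollisions = true;
```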

If you’ve chosen "option 1" to load the scene, you then need to download Oimo.js from our GitHub. It’s a slightly updated version we’ve made to better support Babylon.js. If you’ve chosen "option 2", it’s already referenced and available in the VS solution under the scripts folder.

Write a 3D Soft Engine from Scratch: Part 6
https://www.sitepoint.com/write-3d-soft-engine-scratch-part-6/
Thu, 24 Oct 2013

Here is the final tutorial of this long series. We’re going to see how to apply a texture to a mesh by using mapping coordinates exported from Blender. If you’ve managed to understand the previous tutorials, it will just be a piece of cake to apply some textures. The main concept is once again to […]