
Is there any reason why sound is not playing in my WebVR demo in the Oculus Browser? I already changed video.muted to false and it still won't play.
It's the same demo as in the examples, only with another video that I tried with sound.
Also, it doesn't autostart even though I told it to.
It only autostarts in non-VR mode on my regular laptop.
Here's the link:
http://babylontesting.epizy.com/Three.js/skytime-vr/
Here's the link to the source:
view-source:http://babylontesting.epizy.com/Three.js/skytime-vr/
(Note: copy and paste everything above, including the 'view-source:' portion, into a browser.)
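Not from the demo above, but a minimal sketch of the usual workaround: most browsers (including the Oculus Browser) block autoplay of unmuted video, so playback has to be started from inside a user-gesture handler. The helper name `attachPlayOnGesture` is my own, not part of the demo.

```javascript
// Sketch: start unmuted playback from a click/tap instead of relying on
// the autoplay attribute. Assumes `video` is the <video> element that
// backs the VideoTexture.
function attachPlayOnGesture(video, target) {
  function resume() {
    video.muted = false;
    var p = video.play();
    if (p && p.catch) {
      // play() returns a promise that rejects if playback is still blocked.
      p.catch(function (err) { console.warn('Playback blocked:', err); });
    }
    target.removeEventListener('click', resume);
  }
  target.addEventListener('click', resume);
  return resume; // returned so it can also be triggered manually
}

// In the page: attachPlayOnGesture(document.querySelector('video'), document.body);
```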

I am trying to import a glTF model exported from Blender into my Three.js project.
The problem is that in Three.js all of the model's materials are black (color: Color {r: 1, g: 1, b: 1}), while the mesh is originally green.
I found this topic
(http://www.html5gamedevs.com/topic/41196-object-always-exports-black-3dsmax-gltf-export/)
that shows how to solve the problem in 3ds Max, but how do I solve it for Blender?
Here is my code in Three.js:
var loader = new THREE.GLTFLoader();
loader.load(
    'http://localhost/planegeometryeditor/meshes/map/firstterrain.gltf',
    function (gltf) { scene.add(gltf.scene); },
    undefined,
    function (error) { console.error(error); }
);
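Black materials with a white base color often just mean no light is reaching a lit material (glTF models import as MeshStandardMaterial, which needs lights). Not from the original post, but a common first check is to add lights to the scene; a minimal sketch, with intensity values that are guesses rather than recommendations:

```javascript
// Sketch: glTF models use lit materials after import and render black
// without lights. Add an ambient fill plus a directional "sun".
function addDefaultLights(scene) {
  var ambient = new THREE.AmbientLight(0xffffff, 0.5);
  var sun = new THREE.DirectionalLight(0xffffff, 1.0);
  sun.position.set(5, 10, 7);
  scene.add(ambient);
  scene.add(sun);
}

// Call once before the first render: addDefaultLights(scene);
```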

I made a very simple example (just a rewrite of the Getting Started example) in TypeScript and Three.js, without Angular.
This example shows:
How to compile TS files to AMD modules and load them with Require.js
How to place your examples on Playground (https://plnkr.co/edit/)
And how to use OrbitControls with TS on Playground (it does not work right now, but I will solve it soon)
Check: https://plnkr.co/edit/yICv96E7lTK8xu7DohJB?p=preview
OrbitControls with TS works locally:
https://github.com/8Observer8/usage-orbitcontrols-in-typescript-on-playground
But it does not want to work on Playground.
P.S. I will be very glad if someone can help me and explain why OrbitControls works locally but does not work on Playground.
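For anyone reproducing the setup above: compiling TS files to AMD modules for Require.js is a tsconfig.json setting. A minimal sketch (the outFile path is an assumption, not the repository's actual config):

```json
{
  "compilerOptions": {
    "module": "amd",
    "target": "es5",
    "outFile": "public/js/bundle.js"
  }
}
```

With "module": "amd" and "outFile" set, the compiler concatenates all modules into one AMD bundle that Require.js can load.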

Hello,
I am trying to run the official Getting Started example on Playground (https://plnkr.co/edit/) with OrbitControls.
The first problem was that the official OrbitControls is not TypeScript-friendly, so I took: https://github.com/nicolaspanel/three-orbitcontrols-ts
But this module does not use AMD by default, and I recompiled it to AMD for use with the RequireJS library, because I have a few files (Program.ts and Scene.ts) and I can run my example on Playground only with AMD compilation. Recompilation requires going into the node_modules folder to recompile the module, which is not a common way to do it.
I created a libs folder in my project and copied the contents of dist to libs/three-orbitcontrols-ts/; it works locally: https://github.com/8Observer8/usage-orbitcontrols-in-typescript-on-playground
I put all the files in one directory for use on Plunker, and it works locally: https://github.com/8Observer8/usage-orbitcontrols-in-typescript-on-playground-one-directory
But when I upload the files to Plunker it does not work: https://plnkr.co/edit/yICv96E7lTK8xu7DohJB?p=preview
You will see the error in the console.
Please help me to solve this problem.

Hey everyone ✌️
My name is Tibo and I’m lead Creative Developer at Voodoo, the leading mobile game publisher. I’m looking for coworkers to make HTML5 games with me :)!
This is a paid opportunity for a job in Paris, France (relocation paid if needed) and actually one of the few job offers for HTML5 game developers on the market. If you're interested in gaming, would like to enter the industry, are technically savvy, and are eager to learn a lot, this really is a golden opportunity!
The goal is to create playable ads, a very new and interactive ad format that asks for nothing else but fresh ideas and technical innovations.
Key Skills
Basic knowledge of HTML & CSS
Expert in JavaScript and able to build gameplay with PIXI.js and/or THREE.js
Interest in mobile games
Misc
Competitive salary + Stock options + Performance Bonus + Company Profit Sharing + simply the best company to work for in the world (seriously!!)
Any questions welcome in replies or PM, do not hesitate to check the full offer right here and apply!
👉 https://jobs.lever.co/voodoo/98ae288d-b923-419a-8148-78274a33eb53

Hello,
I thought to place this on the demos and projects thread, however I decided to post this here as it is more a topic for which framework to use and why. I was hired by an elite software development group at Sony Electronics to help them navigate through WebGL to build a pipeline to deliver content for the South By Southwest convention and to create a foundation to quickly develop games and online media for future projects. In short, I was tasked to escape the limitations of 2D media and help Sony move forward into 3D content taking advantage of the WebGL rendering standards.
This was no easy task, as I was hired Dec. 11th and given a hard deadline of March 5th to deliver 2 multiplayer games which were to be the focus of Sony's booth at SXSW in Austin, Texas. But first I had to run a quick evaluation and convince a very proficient team of engineers which framework was the best fit for Sony to invest considerable resources into for SXSW, and which was the right choice to take them into future projects. This was a huge consideration, as the WebGL framework chosen was to play a much greater role at Sony Electronics, considering the group I was assigned to works well ahead of the rest of the industry... developing what most likely will be native intelligent applications on Sony devices (especially smartphones) in the near future. These are applications which benefit consumers by making their day-to-day interactions simple and informative. Thus the chosen WebGL framework needed to be an element in displaying information as well as entertainment for a greater core technology which is developing daily in a unique tool set used by the software engineers to build applications, allowing Sony to remain the leader not only in hardware technology, but in the applications which consumers want to use on Sony devices.
But as I was working for Sony, I also had a greater task, as there were existing expectations in developing a game on Sony devices which needed to be on par with what consumers were already experiencing with their PlayStation consoles. As unrealistic as this might initially appear, that had to be the target, as we couldn't take a step back from the quality and playability the consumer was already accustomed to. So back to the first task... selecting the WebGL framework for Sony Electronics to use moving forward. Rather than telling a story, I'll simply outline why there was little discussion as to which framework to choose. Initially Sony requested someone with Three.js experience, as is more often than not the case. So when they approached me for the position, I told them I would only consider it if they were open to other frameworks as well. They were very willing to consider any framework, as their goal was not political in any way; they only cared about which framework was going to provide them with the best set of tools and features to meet their needs. And one might certainly assume that since Sony PlayStation is in direct competition with Microsoft Xbox, and Microsoft is now providing the resources in house to develop babylon.js, Sony Electronics might see a PR conflict in selecting babylon.js as their WebGL development framework. However, I'm proud to say that there was never a question from anyone at Sony. I was very impressed that their only goal was to select the very best tools for the development work, to look beyond the perceived politics, and to develop the very best applications for the consumer while fulfilling their obligations to their shareholders by building tools that consumers want on their smartphones and other electronic devices.
So once again... Three.js vs. Babylon.js. This was a very short evaluation. What it came down to was that three.js had far more libraries and extensions. However, this was not a strength, since there are no cohesive development cycles in three.js, and although many libraries, tools, and extensions exist, more often than not they are not maintained. So it was easy to demonstrate that practically any tool or extension we would require for the SXSW production would require myself or the team updating it to be compatible with the other tools we might use on the project. This is a failing of the framework, since each developer who writes an extension for three.js is writing for the specific compatibility needs of their own project... and not for the overall framework... as this is not within the scope of any developer or group of developers. Thus I find that most projects require weeks if not months of maintenance in three.js prior to building content, just to ensure compatibility between all of the tools and extensions needed. As for babylon.js, the wheel is not generally re-invented as it is with three.js; most extensions are quickly absorbed into a cohesive framework - provided they have universal appeal - and this integration ensures compatibility, as there are fewer and fewer extensions to use, but instead an integrated set of tools which are thoroughly tested and used in production, revealing any incompatibilities quickly.
The bottom line is that there are no alpha, beta, and development cycles in three.js, thus no stable releases; the opposite is true of babylon.js. There is a cohesive development of the tools, and Sony is smart enough to see beyond the politics and realize that having Microsoft support the development of babylon.js is a huge bonus for an open source framework. And if anyone had to choose a company to support the development of a WebGL or any framework, who better than Microsoft? With practically every other useful WebGL framework in existence spawned by MIT, most are barely useful at best. And why would anyone pay to use a limited WebGL framework such as PlayCanvas when babylon.js is far more functional, stable, and free? This baffles me and most anyone who has completed a project using babylon.js. The only argument against babylon.js is that the development of the framework is now supported in house by Microsoft. But for myself and others, this is a positive, not a negative. I've been assured by the creators and lead developers of babylon.js that they have secured an agreement with Microsoft ensuring the framework remains open source and free. This ensures that anyone is able to contribute and review all code in the framework, and that it remains in the public domain. Sony gets this, and we quickly moved forward adopting babylon.js as the WebGL framework within at least one division of Sony Electronics.
At the end of this post I'll provide a link on YouTube to a news report of not only the games we built for SXSW, but the exciting new technology built on Sony phones which uses the phone's camera to capture a high-resolution (yet optimized) 3D scan of a person's head. This is only a prototype today, but will be a native app on Sony phones in the future. So our task was not only to develop multiplayer games of 15+ simultaneous players in real time, but to have a continuous game which adds a new player as people come through the booth and have their head scanned using a Sony phone. This was an additional challenge, and I must say that I was very fortunate to work with a group of extremely talented software engineers. The team at Sony is the best of the best.
All in all, it was an easy choice choosing babylon.js as the WebGL framework at Sony Electronics in San Diego. Below is a news report from SXSW which shows the new scanning technology in use, as well as a brief example of one of the games on the large booth screen. And using Electron (a stand-alone version of Chromium), I was able to render 15 high-resolution scanned heads, vehicles for each head, animation on each vehicle, particles on each vehicle, and many more animations, collisions, and effects without any limitations on the game - all running at approx. 40 fps. The highlight of the show was when the officers from Sony Japan came through the booth... who are the real people we work for... and gave their thumbs up, as they were very happy with what we achieved in such a short time. And these were the people who wanted to see graphics and playability comparable to what the PlayStation delivered. And they approved.
Link:
Thanks to babylon.js.
DB

Hey!
I want to create a game that can run natively on as many platforms as possible. So, being a front-end developer, I turned to WebGL. But I have a few questions:
1 )
Is it possible to create big, complex, and demanding games like:
Unturned
Risk of Rain
Hotline miami
Minecraft
Terraria
Nidhogg
Battle block theater
Both graphically and technically.
2 )
If you use an HTML wrapper to create a .exe, .apk, etc., is the source code protected?
Also, can you compile to consoles?
3 )
I have read that you can code in C++ and compile it to JavaScript; is that practical? Also, is it possible to write in a high-level, strongly typed language and compile that to JavaScript? (I do not like weakly typed languages, and C++ is too low-level for me.)
4 )
How come I can't find any big games made with WebGL (only tech demos, fancy websites, and games on this forum)?
5 )
When I looked around this forum I didn't see any three.js-based games. Why is that? I looked at the tech demos of many engines and three.js looked the most promising. Or is there something I missed?
6 )
Is WebGL a smart choice for my project? In the end I don't want my game to be playable on the web, only standalone on PC, Linux, and Mac (mobile and console if the project succeeds).
7 )
What is the best engine to use for a 2D/2.5D game with some nice lighting effects?
8 )
Does the Steam SDK (for achievements, joining friends, the Steam Controller, etc.) work well with WebGL?
Thanks for reading; it would mean a great deal to me if you know an answer to any of my questions!

Hey guys. This is a simple RPG demo featuring characters from "Fate/Grand Order" (derivative work), powered by "System Animator 11" (WIP) written by myself.
PLAY: http://www.animetheme.com/system_animator_online/SystemAnimator_online_FGO.html?cmd_line=/TEMP/DEMO/fgo_rpg01
"System Animator" is originally a desktop gadget project, a fully customizable system monitor/music visualizer/animated wallpaper with focus on visuals and fun. It runs on "Electron", which is basically a Chrome browser, and no wonder System Animator itself is basically HTML5. In this upcoming WIP version, I plan to make it fully online and add some gaming features so that it can be used to make browser-based 3D games (mainly RPG for now). If you want to know more about System Animator itself, check out the following page.
http://www.animetheme.com/sidebar/
For more info about the game itself (controls/copyright/license/credits/etc), check out the following README file.
http://www.animetheme.com/system_animator_online/TEMP/DEMO/readme_FGO.txt
The game has only been tested on Google Chrome and Firefox. It doesn't work on Edge right now, but it may work on other modern browsers.
Bug reports and comments are most welcome.

Hi! First post here.
I'm currently experimenting with the Gamepad API and trying to capture the motion/orientation of my DualShock 3 to build controls for a new game project. It seems that WebVR makes use of motion data through the gamepad.pose object, but so far I've been unable to reach it on a regular controller. Has anyone managed such a thing and made it work in the browser? Any help would be appreciated.
Thank you!
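Not a solution, but a sketch of how to inspect what the Gamepad API actually exposes for a connected pad. The helper name `describeGamepad` is my own; in practice `pose` is generally only populated for WebVR controllers, while a regular DualShock usually shows up with axes and buttons only.

```javascript
// Report what a Gamepad object exposes; pose (position/orientation) is
// typically present only on WebVR controllers, not regular gamepads.
function describeGamepad(pad) {
  if (!pad) return null;
  return {
    id: pad.id,
    axes: pad.axes ? pad.axes.length : 0,
    buttons: pad.buttons ? pad.buttons.length : 0,
    hasPose: !!(pad.pose && (pad.pose.orientation || pad.pose.position))
  };
}

// In a browser (press a button first so the pad registers):
// console.log(Array.prototype.map.call(navigator.getGamepads(), describeGamepad));
```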

I want to make a 3D RPG. I already have the storyline planned out. I also already have a third-person camera set up in PlayCanvas, but since I'm not amazing at programming yet, we can use whatever game engine the programmer wants. The RPG will have 4 characters, including the main character. It will have a lot of action elements and I want it to be very atmospheric. I also want the graphics to be low poly. I can try to help a little with the programming, but I probably won't be of much use.
Thank you, and I hope you want to join our team!


www.dogfightx.com
DogfightX is a browser-based 3D HTML5 game; you can play PVP and team fights. Play with or against your friends and overcome original quests involving fast-paced combat, puzzles, and skill. No installation required. Survive and shoot at others while trying to keep your own airplane alive!

Hello!
As my profile states I am new here and rather new with Babylon.js as well.
I found its ease of use and performance (over Three.js) to be good reasons to work with it.
Currently I have been working on a voxel game (i.e. Minecraft-ish) using Three.js, as there are so many voxel libraries already out there for it.
On the other hand, there is pretty much nothing for Babylon. For this reason, I would like to fill the void and, perhaps, find someone who is interested in helping out on the quest.
I started by porting a small library for creating snow (called `voxel-snow`) and named the port `babylon-voxel-snow` (https://github.com/Nesh108/babylon-voxel-snow/). The idea is to make the transition from Three.js to Babylon.js as easy and painless as possible for people (like me) who have been using Three.js for their voxel projects. Adding the prefix `babylon-` makes it extremely easy to find the counterpart for Babylon.
Here are some other voxel libraries which are currently only in Three.js:
☑ Voxel Snow (https://github.com/shama/voxel-snow) --> Babylon Voxel Snow (https://github.com/Nesh108/babylon-voxel-snow/)
☐ Minecraft skin (https://github.com/maxogden/minecraft-skin)
☑ Voxel walk (https://github.com/flyswatter/voxel-walk) --> Babylon Voxel Player (https://github.com/Nesh108/babylon-voxel-player)
☐ Voxel creature (https://github.com/substack/voxel-creature)
☑ Voxel critter (https://github.com/shama/voxel-critter) --> Babylon Voxel Critter (https://github.com/Nesh108/babylon-voxel-critter)
☐ Voxel builder (https://github.com/maxogden/voxel-builder) --> Unneeded, as it can be imported with the Babylon Voxel Critter
☐ Voxel use (https://github.com/voxel/voxel-use)
☐ Voxel mine (https://github.com/voxel/voxel-mine)
☐ Voxel carry (https://github.com/voxel/voxel-carry)
☐ Voxel chest (https://github.com/voxel/voxel-chest)
☐ Voxel inventory creative (https://github.com/voxel/voxel-inventory-creative)
☐ Voxel items (https://github.com/jeromeetienne/voxel-items)
☑ Voxel clouds (https://github.com/shama/voxel-clouds) --> Babylon Voxel Clouds (https://github.com/Nesh108/babylon-voxel-clouds)
☑ Voxel skybox --> Babylon Voxel Skybox (https://github.com/Nesh108/babylon-voxel-skybox/)
As I go, I will try to slowly implement them for Babylon, so hit me up if anyone would like to help out

Hello everyone. I've searched for two days for ways to clone a glTF object, but nothing works. I've tried deep-cloning the object, but nothing works. It seems the object is only added to the glTF render list once, when it is loaded, and the cloned body can't be rendered on screen.
Here is the result of scene.add(obj.clone()):
var gltfLoader = new THREE.GLTFLoader();
gltfLoader.load('assets/model/gltf/tree/tree.gltf', function (data) {
    var gltf = data;
    var gltfobj = gltf.scene !== undefined ? gltf.scene : gltf.scenes[0];
    gltfobj.position.z += 5;
    gltfobj.name = "tree";
    scene.add(gltfobj);
    var tree2 = gltfobj.clone();
    tree2.position.x += 1;
    scene.add(tree2);
});
The cloned object only shows a shadow in the scene.
I've tested the ColladaLoader and the DAE object works well, so I don't know what is going wrong. What should I do to clone it in the three.js scene?
Can anybody help me? Thanks!
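For what it's worth, a plain Object3D.clone() does not re-bind skinned meshes to a new skeleton, which can leave the copy invisible; newer three.js releases ship SkeletonUtils (examples/js/utils/SkeletonUtils.js) for exactly that case. A minimal sketch, assuming that script is included and the model is skinned:

```javascript
// Sketch: clone a loaded glTF scene with SkeletonUtils, which duplicates
// and re-binds skeletons correctly (plain .clone() does not).
gltfLoader.load('assets/model/gltf/tree/tree.gltf', function (data) {
    var original = data.scene;
    scene.add(original);
    var copy = THREE.SkeletonUtils.clone(original);
    copy.position.x += 1;
    scene.add(copy);
});
```

If the model has no skeleton, a black or shadow-only clone can also point to missing lights or shared materials instead.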

Hey there,
I've recently started to dig my way more into three.js in order to build my own image-viewer-app as my first three.js project.
I'm using three.js r83 and both the EffectComposer as well as the Shader/RenderPass from the three.js examples. (View on github)
Since I'm familiar with other programming languages I was able to figure out a lot of stuff on my own, but currently I'm struggling with this specific problem:
My App should be able to add post-processing effects to the currently viewed image. The post-processing part already works like a charm, but I would like to add more effects as I want to test/experiment around with some new sorts of possibilities for an image-viewer.
Since I'm obsessed with performance, I came up with some ideas on how to split the post-processing across different EffectComposers in order to keep the weight (number of shaders to render) on each composer low and therefore its performance high.
What I did: After debugging both the EffectComposer and Shader/RenderPass from the three.js examples, I came up with the idea to render a texture, that I'm able to re-use as a uniform in another Composer later on. This would enable me to encapsulate and pre compute whole post-processing chains and re-use them in another Composer.
While I was debugging through the ShaderPass, I found what I think is the key element to get this to work. I won't post the code here as it's accessible via GitHub, but if you have a look into ShaderPass.js at line 61 you can see the class's render function. The parameter writeBuffer is a WebGLRenderTarget and, afaik, it is used to store what the composer/renderer would usually put out to the screen.
I've created 2 identical Composers using the following code:
var txt = testTexture;
var scndRenderer = new THREE.WebGLRenderer({
    canvas: document.getElementById("CanvasTwo"),
    preserveDrawingBuffer: true
});
scndRenderer.setPixelRatio(window.devicePixelRatio);
var containerTwo = $("#ContainerTwo")[0];
scndRenderer.setSize(containerTwo.offsetWidth, containerTwo.offsetHeight);
console.log("Creating Second Composer.");
console.log("Texture used:");
console.log(txt);
var aspect = txt.image.width / txt.image.height;
var fov = 60;
var dist = 450;
// Compute the camera fov (in degrees) needed to fit the image at this distance
fov = 2 * Math.atan((txt.image.width / aspect) / (2 * dist)) * (180 / Math.PI);
var scndCam = new THREE.PerspectiveCamera(fov, aspect, 1, 10000);
scndCam.position.z = dist;
var scndScene = new THREE.Scene();
var scndObj = new THREE.Object3D();
scndScene.add(scndObj);
var scndGeo = new THREE.PlaneGeometry(txt.image.width, txt.image.height);
var scndMat = new THREE.MeshBasicMaterial({
    color: 0xFFFFFF,
    map: txt
});
var scndMesh = new THREE.Mesh(scndGeo, scndMat);
scndMesh.position.set(0, 0, 0);
scndObj.add(scndMesh);
scndScene.add(new THREE.AmbientLight(0xFFFFFF));
// Post-processing
scndComposer = new THREE.EffectComposer(scndRenderer);
scndComposer.addPass(new THREE.RenderPass(scndScene, scndCam));
var effect = new THREE.ShaderPass(MyShader);
effect.renderToScreen = false; // Set to false in order to use the writeBuffer
scndComposer.addPass(effect);
scndComposer.render();
I then modified three's ShaderPass to access the writeBuffer directly.
I added a needsExport property to the ShaderPass and some logic to actually export the writeBuffers texture:
renderer.render(this.scene, this.camera, writeBuffer, this.clear);
// New code
if (this.needsExport) {
    return writeBuffer.texture;
}
I then simply set the needsExport for the last pass to true. After rendering this pass, the texture stored in the writeBuffer is returned to the EffectComposer. I then created another function inside of the EffectComposer to just return the writeBuffer.texture, nothing too fancy.
The issue: I'm trying to use the writeBuffer's texture (which should hold the image that would have been rendered to screen if I had set renderToScreen to true) as a uniform in another EffectComposer.
As you can see in code block 1, the texture itself isn't resized or anything. The texture has the right dimensions to fit into a uniform for my second composer; however, I'm constantly receiving a black image from the second composer no matter what I do. This is the code I'm using:
function Transition(composerOne, composerTwo) {
    if (typeof composerOne !== "undefined" && typeof composerTwo !== "undefined") {
        var tmp = composerOne.export();
        // Clone the shader's uniforms
        shader = THREE.ColorLookupShader;
        shader.uniforms = THREE.UniformsUtils.clone(shader.uniforms);
        var effect = new THREE.ShaderPass(shader);
        // Add the shader-specific uniforms
        effect.uniforms['tColorCube1'].value = tmp; // Set the exported texture as a uniform
        composerTwo.passes[composerTwo.passes.length - 1] = effect; // Overwrite the last pass
        var displayEffect = new THREE.ShaderPass(THREE.CopyShader);
        displayEffect.renderToScreen = true;
        // Add the CopyShader as the last pass in order to display the image with all shaders active
        composerTwo.insertPass(displayEffect, composerTwo.passes.length);
        composerTwo.render();
    }
}
Conclusion: To be completely honest, I don't have a clue about what I'm doing wrong.
From what I've read, learned while debugging and from what I've figured out so far, I would argue that this is a bug. I would be really glad if someone could prove me wrong or submit a new idea on how to achieve something like what I'm already trying to do.
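For comparison, here is a minimal sketch of the same idea without modifying ShaderPass, using an explicit render target (the variable names are mine, not from the app above). One caveat worth checking: EffectComposer swaps its read/write buffers after each pass that has needsSwap set, so after render() the final image normally sits in the composer's readBuffer, not the writeBuffer.

```javascript
// Sketch: render composer one off-screen, then feed its output texture
// into a uniform of a pass in composer two. Sizes are assumptions.
var target = new THREE.WebGLRenderTarget(1024, 1024);
var composerOne = new THREE.EffectComposer(scndRenderer, target);
composerOne.addPass(new THREE.RenderPass(scndScene, scndCam));
var firstEffect = new THREE.ShaderPass(MyShader);
firstEffect.renderToScreen = false; // keep the result in the buffers
composerOne.addPass(firstEffect);
composerOne.render();

// After render(), passes with needsSwap have swapped the buffers, so the
// latest result is usually here:
effect.uniforms['tColorCube1'].value = composerOne.readBuffer.texture;
```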
If any more information is needed to solve this, please let me know!
Regards,
Michael

I am working for a Wearable Computing and Augmented Reality Startup in Bremen, Germany: http://www.ubimax.de
To improve our (PIXI.js-powered) web editor, which configures our Augmented Reality solutions, we are looking for a Web Application Developer (m/f) to join our team in Bremen (a job permit for the EU is required).
It says full-time in the job description but students looking for an internship are also very welcome!
We are a team of people from all over the globe, so everyone in our team speaks English fluently but German is a big plus.
The Job description (in German) is attached to this post.
Please apply with your full resume including school and other certificates as well as code examples (e.g. github links) and references to career@ubimax.de .
Feel free to ask me for further details on the job.
162810_Ubimax_Stellenauschreibung_WebApplicationDeveloper.pdf

Hello Freelancers,
My name is Dwayne and I am the technical account manager for Mass Ideation. I actually joined this forum years ago when I was learning Pixi,
but right now I'm contacting you because we are looking for a developer with PIXI.js and/or Three.js skills.
It's roughly a one-to-two-month project, due approximately October 7th-14th. There may be some testing/QA afterwards for about a week.
It's a web app built around a screenshot of a living room; the scene will be viewable from different camera views, and users will upload images into the app. We need an editable scene that allows people to pull in images and place them in the scene. Users can select from a variety of Christmas trees, pick and place photos (as ornaments), customize the background/setting, add a wreath to the door, and even upload a family photo to hang on the wall.
If you are interested, please fill out this survey with your skills and hourly rate http://join.massideation.com/
Best,
Dwayne

I have absolutely nothing to do with the following code; I just thought I'd share it, and I was slightly unsure which section it was best suited to.
https://github.com/mfosse/multiplayerFramework
It's 1.6 GB with models, sounds, etc.
http://f1v3.net/mmo/
I haven't been able to load the hosted game, though, so I guess something is down.
I just wanted to share it, as it seems to implement an authoritative model, with physics and hit detection done on the server. And I'm pretty sure this is something quite a few users are looking for, based on threads on the forums.
The server uses three.js too, but that's only for some basic vertex manipulation for the cannon.js heightfield, I guess.
I have trouble installing canvas on Node, as a lot of people seem to, so I haven't tested it yet.

I started an app about 4 months ago in three.js as a proof of concept.
It's not a game but a shipyard inventory system. It maintains the positions of over 20,000 containers in 3 basic sizes, and reflects the movement of over 200 vehicles of 5 different types. The prototype app positions the vehicles and moves the containers around the yard.
The vehicles and containers use models in the js format. The containers are wrapped in about 15 different textures that have the company logo in 3 basic sizes, along with a couple of default textures for when we don't have a matching company logo.
Now the problems.
With Three.js I have noticed a massive memory leak that I think is caused by the adding and removal of containers. For performance I placed all the containers in just a few concatenated pieces of geometry to reduce the draw calls; the vehicles I left alone. To remove a container I modify the geometry on the fly. That said, I'm not sure if I should even do it that way, and there is no place just to ask questions like "can I do this?" or "does this work in three.js?". The other issue I have is that it keeps changing versions with internal alterations; as my code becomes larger, modifying it becomes more difficult.
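On the memory-leak point: three.js does not free GPU resources when objects are removed from a scene; dispose() has to be called explicitly on geometries, materials, and textures that are no longer shared. A minimal sketch (the helper name is mine, not from the app described above):

```javascript
// Free GPU-side resources of a removed mesh. Only call this when the
// geometry/material/texture is not shared with other live objects.
function disposeMesh(mesh) {
  if (mesh.geometry && mesh.geometry.dispose) mesh.geometry.dispose();
  var mats = Array.isArray(mesh.material) ? mesh.material : [mesh.material];
  mats.forEach(function (mat) {
    if (!mat) return;
    if (mat.map && mat.map.dispose) mat.map.dispose(); // texture
    if (mat.dispose) mat.dispose();
  });
}

// Usage: scene.remove(container); disposeMesh(container);
```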
Then I did a search on three.js vs. alternatives and found Babylon, read a few posts about it being more industrial-strength, and, seeing a board that actually discusses the framework, I am more intrigued.
Does it make sense to change? Is the API similar or different? I saw the multi-thousand-sphere demos, but they had no textures; is it possible to apply image textures from an array? Does Babylon have a js loader for models and PNG loaders for textures? Is there a light that is like sunlight?
Feel free to ask questions etc.
thanks