Hey guys! I've got one main question, with a few follow-ups. Here goes: is there a way to draw an image onto a PIXI.Graphics object? I'm aware I can add sprites to the stage and other containers, but I'm currently drawing polygons and images with a dynamic render order, so this seems like a good way to do that. Is there a way of doing this, similar to the plain/vanilla canvas way?

var canvas = Dom.get('canvas');
var context = canvas.getContext('2d');
context.drawImage(source, x, y, w, h, ...);

I've tried `var context = pixiRenderer.context;`, but this only returns the following: `CanvasRenderingContext2D {}`.

And now for the follow-ups: is the 2d context unique to the CanvasRenderer? Would drawing images to the Graphics object limit me to the CanvasRenderer, or could I still use PIXI.autoDetectRenderer and PIXI.WebGLRenderer?

Thanks in advance!
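A minimal sketch of one way around this, assuming Pixi v3/v4: you can't draw an image directly into a `PIXI.Graphics`, but you can mix Graphics and Sprite objects in one Container and control the draw order by child index, which works with both renderers (so `PIXI.autoDetectRenderer` stays usable). The `depth` field and the `source.png` path are inventions of this sketch, not Pixi properties:

```javascript
// Pure helper: order display objects by a custom numeric `depth`
// field (a name made up for this sketch, not part of Pixi).
function sortByDepth(items) {
  return items.slice().sort(function (a, b) { return a.depth - b.depth; });
}

// Guarded so the sketch stands alone without Pixi loaded.
if (typeof PIXI !== 'undefined') {
  var layer = new PIXI.Container();

  var poly = new PIXI.Graphics();
  poly.beginFill(0xff0000);
  poly.drawPolygon([0, 0, 100, 0, 50, 80]);
  poly.endFill();
  poly.depth = 1;

  var img = PIXI.Sprite.fromImage('source.png'); // hypothetical asset path
  img.depth = 0;

  layer.addChild(poly, img);
  // Re-apply the order whenever depths change:
  sortByDepth(layer.children).forEach(function (c, i) {
    layer.setChildIndex(c, i);
  });
}
```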

Is there any way to render the tilemap layers and get the resulting image? I just want a static minimap (it will not reflect changes in the world).
Initially I tried to do this with a second camera that looks at the game world, ignoring the unnecessary objects. It sort of worked, but you have to pick the zoom carefully to avoid rendering artifacts.
At the moment the minimap lives in another scene, so it has no access to the renderer of the game scene, i.e. the camera option does not work.
So I tried using the built-in renderer and its snapshot function. At the map's initialization stage I create an additional game instance whose world size equals the minimap (in other words, a game inside the game) and try to take a screenshot of it.
The result comes back as base64, but in my case it is invalid, and the expected size should be an order of magnitude greater. I suspect that because both my map and the game-in-game initialize asynchronously, the snapshot is taken before rendering has finished.
GameInGame code:
const startMiniGame = tileMap => {
  const factor = 10;

  function preload() {
    this.load.image('tiles', tilesheet);
  }

  function create() {
    const mapData = Tilemaps.Parsers.Tiled('map', tileMap, undefined);
    this.map = new Tilemaps.Tilemap(this, mapData);
    const { widthInPixels, heightInPixels, tileHeight, tileWidth } = this.map;
    this.tiles = this.map.addTilesetImage('maptile', 'tiles', tileWidth, tileHeight, 1, 2);
    MapService.Layers.forEach(([layer]) => {
      this.map.createDynamicLayer(layer, this.tiles, 0, 0);
    });
    this.cameras.main.setBounds(0, 0, widthInPixels / factor, heightInPixels / factor);
    this.cameras.main.setZoom(factor);
  }

  return {
    type: Phaser.CANVAS,
    width: tileMap.widthInPixels / factor,
    height: tileMap.heightInPixels / factor,
    scene: {
      preload,
      create
    }
  };
};
And initialization:
// method of MapService
createMiniMapSnapshot = tileMap => {
  // here I can catch the error event;
  // game.renderer.snapshot runs, but gives something wrong
  this.scene.sys.textures.on('onerror', (...args) => {
    console.error('onerror', ...args);
  });
  const game = new Phaser.Game(startMiniGame(tileMap));
  game.renderer.snapshot(image => {
    this.scene.sys.textures.addBase64('s', image);
  });
};
Perhaps someone has encountered such a task, or has some ideas?
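One likely culprit, sketched under the assumption this is Phaser 3: `snapshot()` grabs whatever is on the canvas right now, and calling it immediately after `new Phaser.Game(...)` runs before the hidden game has rendered a single frame. Waiting for the game's first `postrender` event should give a complete frame. `minimapSize` just centralizes the `factor` math, and the `'minimap'` texture key is arbitrary:

```javascript
// Pure helper: minimap dimensions derived from world size, matching
// the `factor` division used in startMiniGame.
function minimapSize(widthInPixels, heightInPixels, factor) {
  return {
    width: Math.round(widthInPixels / factor),
    height: Math.round(heightInPixels / factor)
  };
}

// Guarded so the sketch stands alone without Phaser loaded.
function createMiniMapSnapshot(tileMap, textures, startMiniGame) {
  if (typeof Phaser === 'undefined') return;
  const game = new Phaser.Game(startMiniGame(tileMap));
  // Wait for the first full frame before grabbing the canvas:
  game.events.once('postrender', () => {
    game.renderer.snapshot(image => {
      // snapshot hands back an image element; addBase64 wants the data URL
      textures.addBase64('minimap', image.src);
      game.destroy(true); // the throwaway game has served its purpose
    });
  });
}
```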

Hi,
I have a question regarding `alwaysSelectAsActiveMesh`.
I was experimenting with it and saw that there is a check for active meshes in `_evaluateActiveMeshes`:
if (mesh.alwaysSelectAsActiveMesh || mesh.isVisible && mesh.visibility > 0 && ((mesh.layerMask & this.activeCamera.layerMask) !== 0) && mesh.isInFrustum(this._frustumPlanes)) ...
Is there any reason why `isVisible` and `visibility` are not checked before `alwaysSelectAsActiveMesh`?
Like this:
if (mesh.isVisible && mesh.visibility > 0 && (mesh.alwaysSelectAsActiveMesh || ((mesh.layerMask & this.activeCamera.layerMask) !== 0 && mesh.isInFrustum(this._frustumPlanes))) ...
Does it have some specific semantics? As I see it, if a mesh is not visible it should not be rendered.

Hi all,
I need to display a large image and dynamically hide sections of it.
I already use masking to specify which parts should be displayed.
The code I use to achieve this is in the form:
this._topMask = game.add.graphics(0, 0);
this._topMask.drawRect(0, 170, game.width, vm.ActiveGameHeight - 320);
this.defaultGroup.mask = this._topMask;
This is useful in that it gives me a rectangle within which the image renders.
Now I need to dynamically block sections of that rectangle and create areas that are not rendered.
Ideally, I would like to be able to create masks similar to the image below. I cannot use pre-saved images because the positions of the rectangles change dynamically.
What is the most performance-optimised way to achieve this in Phaser? (I am on version 2.4.4.)
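A sketch of one approach that avoids pre-saved images, under a simplifying assumption made here (blocked areas approximated as full-width horizontal bands): since a Graphics mask is the union of everything drawn into it, clear and redraw the mask's visible rectangles whenever the blocked areas move. The `170` / `150` bounds and `defaultGroup` are stand-ins for your own values:

```javascript
// Pure helper: subtract horizontal blocked bands from the visible
// range [top, bottom], returning the bands that remain visible.
function visibleBands(top, bottom, blocked) {
  const sorted = blocked.slice().sort((a, b) => a.top - b.top);
  const out = [];
  let y = top;
  for (const b of sorted) {
    if (b.top > y && y < bottom) out.push({ top: y, bottom: Math.min(b.top, bottom) });
    y = Math.max(y, b.bottom);
  }
  if (y < bottom) out.push({ top: y, bottom: bottom });
  return out;
}

// Applying it to a Phaser 2 Graphics mask (guarded so the sketch
// stands alone without Phaser loaded):
if (typeof game !== 'undefined') {
  const mask = game.add.graphics(0, 0);
  defaultGroup.mask = mask; // your masked group
  const redrawMask = blocked => {
    mask.clear();
    mask.beginFill(0xffffff);
    visibleBands(170, game.height - 150, blocked).forEach(b => {
      mask.drawRect(0, b.top, game.width, b.bottom - b.top);
    });
    mask.endFill();
  };
  redrawMask([{ top: 300, bottom: 360 }]); // call again whenever the blocks move
}
```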

Hello everyone!
I have a very specific question: I need to somehow pause the game at the end of a frame render, execute a function, and then render the next frame, and so on, over and over. Is that possible? If there is no built-in function that allows this, which function takes care of scene rendering, so that I could modify it?
Thanks!
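An engine-agnostic sketch of one way to do this: rather than patching the engine's internal render function, drive rendering yourself with `requestAnimationFrame`, so your callback always runs between two rendered frames. The `schedule` parameter exists only so the loop can be exercised outside a browser:

```javascript
// A manual render loop: render a frame, run the between-frames work,
// then (and only then) schedule the next frame.
function makeSteppedLoop(renderFrame, betweenFrames, schedule) {
  schedule = schedule || (cb => requestAnimationFrame(cb));
  let running = false;
  const tick = () => {
    if (!running) return;
    renderFrame();    // e.g. scene.render() or renderer.render(stage)
    betweenFrames();  // your work, guaranteed to run after the render
    schedule(tick);
  };
  return {
    start() { running = true; schedule(tick); },
    stop() { running = false; }
  };
}
```

With Babylon you would stop the built-in loop first (`engine.stopRenderLoop()`) and pass `() => scene.render()` as `renderFrame`; with Pixi, stop the ticker and pass `() => renderer.render(stage)`.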

Hi all,
We've published a Babylon.js based product configurator for a client here:
https://v-moda.com/pages/forza-metallo_customizer
It works perfectly everywhere, but on my brand new Galaxy S9 the scene renders like this:
What could be the cause of it? Any ideas?
Thanks!
(attachment: PastedGraphic-2.tiff)

The webview which uses Pixi.js will crash on iOS when the app goes to the background, with something like _gpus_ReturnNotPermittedKillClient.
So we have to pause the WebGL renderer. There is lockRender in Phaser; is there a similar method in Pixi.js to pause the renderer? Thank you.
updateRender: function (elapsedTime) {
  if (this.lockRender)
  {
    return;
  }
  // ...
}
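A sketch of the Pixi-side equivalent, assuming `PIXI.Application` (v4+): there is no `lockRender` flag, but the ticker that drives rendering can be stopped and restarted with `app.stop()` / `app.start()`. The small controller keeps the pause idempotent and is written so it can be tested without Pixi:

```javascript
// Tiny state machine around start/stop so repeated visibility events
// don't double-stop or double-start the ticker.
function makePauseController(start, stop) {
  let paused = false;
  return {
    setHidden(hidden) {
      if (hidden && !paused) { stop(); paused = true; }
      else if (!hidden && paused) { start(); paused = false; }
      return paused;
    }
  };
}

// Wiring it up (guarded so the sketch stands alone):
if (typeof PIXI !== 'undefined' && typeof document !== 'undefined') {
  const app = new PIXI.Application({ width: 800, height: 600 });
  const ctrl = makePauseController(() => app.start(), () => app.stop());
  document.addEventListener('visibilitychange', () => {
    ctrl.setHidden(document.hidden); // no WebGL work while backgrounded
  });
}
```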

Hi everyone,
I've just signed up to this forum and I want to show you my new game, made in Construct 2.
This game is not 100% finished (maybe 95%), so there are a few bugs. Also, ignore the "audiojungle" sounds; I haven't bought the sounds yet.
Any comments and questions are welcome.
Play the preview here : http://www.actionoyun.0fees.us

I have an mp4 video file (H264 encoded).
I want to create a PIXI.extras.AnimatedSprite from this video so that reversed animation playback doesn't lag.
If I split this video into an image sequence, the size becomes huge (about 8 MB vs 500 KB for the mp4).
I want to load the small mp4 file and extract an image array from it to create an AnimatedSprite.
Reverse playing directly from the <video> element by changing video.currentTime lags. I want to create something like a hover/unhover animation.
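A sketch of the decode-once approach, assuming Pixi v4 (hence `PIXI.extras.AnimatedSprite`): seek a hidden `<video>` through evenly spaced times, draw each frame to a canvas, and build one texture per frame. The resulting AnimatedSprite can then play in either direction without touching `video.currentTime`. The frame count is an assumption you'd tune against memory use:

```javascript
// Pure helper: `count` evenly spaced seek times across the video.
function frameTimes(duration, count) {
  const out = [];
  for (let i = 0; i < count; i++) out.push(i * duration / count);
  return out;
}

// Seek, capture, and build textures (browser + Pixi only).
async function videoToTextures(video, count) {
  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext('2d');
  const textures = [];
  for (const t of frameTimes(video.duration, count)) {
    video.currentTime = t;
    await new Promise(res => { video.onseeked = res; }); // wait for the seek
    ctx.drawImage(video, 0, 0);
    // Data URLs are unique per frame, so each call yields a new texture:
    textures.push(PIXI.Texture.fromImage(canvas.toDataURL()));
  }
  return textures;
}

// Usage sketch:
//   const sprite = new PIXI.extras.AnimatedSprite(await videoToTextures(video, 60));
```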

Hello, is there a way to advance a VideoTexture by just 1 frame per RenderLoop?
As I understand it, playback is currently time based. This creates the problem that if the frames need to be captured to create an animated sequence, the video keeps advancing between two captured frames.
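A sketch under the assumption this is Babylon.js's VideoTexture: the texture exposes the underlying `<video>` element, so pause it and advance `currentTime` by a fixed step inside the render loop, which makes playback frame-locked rather than time-based. The 30 fps and the `movie.mp4` file are assumptions about the source video:

```javascript
// Pure helper: the next seek position, looping back at the end.
function nextVideoTime(current, fps, duration) {
  const t = current + 1 / fps;
  return t >= duration ? 0 : t;
}

// Guarded so the sketch stands alone without Babylon loaded.
if (typeof BABYLON !== 'undefined') {
  const engine = new BABYLON.Engine(document.getElementById('canvas'), true);
  const scene = new BABYLON.Scene(engine);
  const videoTex = new BABYLON.VideoTexture('vid', 'movie.mp4', scene); // hypothetical file
  videoTex.video.pause(); // stop time-based playback entirely
  engine.runRenderLoop(() => {
    const v = videoTex.video;
    v.currentTime = nextVideoTime(v.currentTime, 30, v.duration);
    scene.render(); // the texture now advances exactly one step per frame
  });
}
```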

Hi everyone
I read this about glTF: https://pissang.github.io/qtek-model-viewer/ and tested it with the "Adam head": https://sketchfab.com/features/gltf
The render system is the same as Sketchfab's for post-processes.
It's really cool because when there are no animations, the engine computes a better render quality step by step.
Post-processes like ambient occlusion look really great and don't drop FPS.
This lets us run a scene at 60 FPS with animation and get a beautiful image once all meshes are static.
All post-processes and shadows seem to be computed with rough noise at the beginning and with more precision at the end. (It looks like a trick to get more performance...)
So, I'm opening this post to discuss it and to see whether it would be a good idea to add this to Babylon.
What do you think?
Have a nice day!
PS: I ask this because I'm working on this:
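For discussion: the refinement trick described above (often called progressive or temporal accumulation) boils down to showing one noisy sample per frame while the camera moves, then folding new samples into a running average once everything is static, so the noise converges away. A minimal sketch of the averaging step (the buffer layout is an invention of this sketch):

```javascript
// Running average after n samples: avg_n = avg_(n-1) + (x_n - avg_(n-1)) / n
function accumulate(avg, sample, n) {
  return avg + (sample - avg) / n;
}

// Per-pixel idea: reset n to 1 whenever the camera moves; otherwise
// increment n each frame and the image sharpens over time.
function refine(buffer, newFrame, n) {
  return buffer.map((avg, i) => accumulate(avg, newFrame[i], n));
}
```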

Hi,
My query is this: I use the Cocos Creator tool to position a text. When I export the position of this element and then integrate it into my Phaser game, the selected position is slightly different from what is previewed in the Cocos tool.
I suspect it is due to how both tools draw the font in their respective containers. As you can see in the attached images, in the tool the text is centered in its container; however, in Phaser, when debugging the edges of the text, you can see that the font is positioned higher up.
In short, both containers are in the same position, but the texts are drawn at different positions.
Is there any way to modify how Phaser draws the text in its container?
As additional information, both texts are centered and added to a sprite as a child.
The way the text is positioned varies according to the font used.
Cocos tool
Phaser:
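A sketch of a pragmatic workaround rather than a Phaser setting: since the discrepancy depends on each font's metrics, keep a per-font vertical correction and apply it when placing text exported from Cocos. The offset values, font names, and `parentSprite` below are all hypothetical and would be tuned by eye:

```javascript
// Hypothetical per-font corrections, in pixels (tune by eye per font).
const FONT_Y_OFFSETS = { Arial: 0, 'My Game Font': 4 };

function correctedY(exportedY, fontFamily) {
  return exportedY + (FONT_Y_OFFSETS[fontFamily] || 0);
}

// Guarded Phaser 2 usage so the sketch stands alone:
if (typeof game !== 'undefined') {
  const text = game.add.text(0, correctedY(100, 'My Game Font'), 'READY', {
    font: '24px My Game Font',
    fill: '#ffffff'
  });
  text.anchor.set(0.5); // center in its container, matching the Cocos preview
  parentSprite.addChild(text);
}
```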

Hi,
I'm currently making a game using Phaser. In this game, the player has to jump to escape balls rolling on the ground, like in an infinite runner. Everything is working fine, but the player is shown using a spritesheet with 128x128 frames. The running animation doesn't use the full 128 pixels of width, causing the ball to collide with the player even when it doesn't actually touch them. The player hitbox is currently configured as 128x128, but I know that in previous versions of Phaser it was possible to make the hitbox match the character's visible size.
I can't find this function in the current version of Phaser. Can someone help?
Thanks in advance.
For information :
Phaser example on previous versions :
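Assuming arcade physics (Phaser 2 shown; Phaser 3's `body.setSize` is similar), the hitbox can be shrunk independently of the 128x128 frame. The 64x110 body below is a hypothetical size to tune against the actual artwork:

```javascript
// Pure helper: center the body horizontally and sit it on the
// frame's bottom edge.
function bodyOffsets(frameW, frameH, bodyW, bodyH) {
  return { x: (frameW - bodyW) / 2, y: frameH - bodyH };
}

// Guarded so the sketch stands alone without Phaser loaded.
if (typeof game !== 'undefined') {
  game.physics.arcade.enable(player);
  const off = bodyOffsets(128, 128, 64, 110);
  // setSize(width, height, offsetX, offsetY) on the arcade body:
  player.body.setSize(64, 110, off.x, off.y);
}
```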

Hello,
I'm doing some stress tests with PIXI, and it seems that PIXI draws every element of the stage, whether it is within the visible screen area or not.
I'm testing with 300 circles, and even if I move the layer to coordinates very far from the screen view (x=10000, y=10000), it still processes them all and gives me very low FPS.
Is there any way to process only "in-range" objects to save CPU/GPU usage?
Also, is a plain draw (drawCircle) slower or heavier than an image?
Thanks
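A sketch of manual culling, assuming `PIXI.Application` and a `radius` stored on each circle by your own code (both assumptions of this sketch): Pixi doesn't cull offscreen objects for you, so toggle `renderable` on anything whose bounds fall outside the view each tick. On the second question: `drawCircle` geometry is generally heavier than sprites, which batch very well, so for 300 circles a shared circle texture is usually faster.

```javascript
// Pure helper: is a circle at (x, y) with `radius` inside the view?
function inView(x, y, radius, viewW, viewH) {
  return x + radius >= 0 && x - radius <= viewW &&
         y + radius >= 0 && y - radius <= viewH;
}

// Guarded so the sketch stands alone without Pixi loaded.
if (typeof PIXI !== 'undefined') {
  const app = new PIXI.Application({ width: 800, height: 600 });
  const circleLayer = new PIXI.Container();
  app.stage.addChild(circleLayer);
  app.ticker.add(() => {
    for (const c of circleLayer.children) {
      const p = c.getGlobalPosition();
      // `c.radius` is assumed to be stored on each circle by your code:
      c.renderable = inView(p.x, p.y, c.radius, app.screen.width, app.screen.height);
    }
  });
}
```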

I'm trying to make game resize appropriately to size of the window/screen.
Everything's working out great, except for the tilemap. It seems like the rendering bounds are not updated.
(TypeScript)
onResize() {
  this.game.scale.refresh();
  var newWidth = window.innerWidth / 3;
  var newHeight = window.innerHeight / 3;
  this.game.scale.setGameSize(newWidth, newHeight);
  this.game.scale.scaleMode = Phaser.ScaleManager.SHOW_ALL; // need to call this to apply new size?
  this.game.camera.setSize(newWidth, newHeight);
  for (var i = 0; i < this.tilemap.layer.length; i++) {
    this.tilemap.layer[i].width = newWidth;
    this.tilemap.layer[i].resizeFrame(this.tilemap, newWidth, newHeight);
    this.tilemap.layer[i].crop(new Phaser.Rectangle(0, 0, newWidth, newHeight), false);
    this.tilemap.layer[i].updateCrop();
  }
  this.game.camera.follow(this.player.sprite, Phaser.Camera.FOLLOW_TOPDOWN, 0.8, 0.8);
}
As you can see, I've tried everything; I would've expected resizeFrame or crop to do something, but unfortunately they don't.
Do note that the rest of the game updates the size correctly, as the fish get rendered in the widened area just fine.
Any ideas how to update the tilemap to the new size?
Thanks!
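One thing worth trying, assuming you're on Phaser CE (it added `TilemapLayer.resize(width, height)`, which rebuilds the layer's internal canvas and render bounds in one call; plain 2.x builds may not have it). The helper below degrades gracefully when the method is missing:

```javascript
// Resize every tilemap layer that supports the CE resize() call;
// returns how many layers were actually resized.
function resizeLayers(layers, width, height) {
  let resized = 0;
  for (const layer of layers) {
    if (layer && typeof layer.resize === 'function') {
      layer.resize(width, height); // rebuilds the canvas + render bounds
      resized++;
    }
  }
  return resized;
}

// In onResize(), after setGameSize/camera.setSize:
//   resizeLayers(this.tilemapLayers, newWidth, newHeight);
// where this.tilemapLayers is the array of display objects returned
// by createLayer (a name assumed by this sketch).
```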

https://www.babylonjs-playground.com/#4HUQQ#207
Hello again,
I'm trying to figure out how to check for collisions using the intersectsMesh method.
The problem is that when I start my scene, the intersectsMesh callback fires immediately; I believe the meshes start at the origin of the world on the first render, before they are moved.
The PG above demonstrates the problem. If you open the browser's developer console, you will see "SHOULDNT HAPPEN" logged when the mesh intersections occur. Any tips on how to prevent this? Thank you!
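A sketch of the usual fix: world matrices are only (re)computed during a render, so meshes positioned right after creation still report their origin placement to the first `intersectsMesh` call. Forcing the matrix update after setting positions should make the first check see the real transforms. The tiny helper shows the per-axis overlap test that bounding-box intersection reduces to:

```javascript
// Per-axis interval overlap: AABB intersection is just this test
// applied on x, y, and z.
function intervalsOverlap(minA, maxA, minB, maxB) {
  return minA <= maxB && maxA >= minB;
}

// Guarded Babylon usage so the sketch stands alone:
if (typeof BABYLON !== 'undefined') {
  const engine = new BABYLON.NullEngine(); // headless here; any engine works
  const scene = new BABYLON.Scene(engine);
  const a = BABYLON.MeshBuilder.CreateBox('a', {}, scene);
  const b = BABYLON.MeshBuilder.CreateBox('b', {}, scene);
  a.position.x = 5;
  a.computeWorldMatrix(true); // force: don't wait for the first render
  b.computeWorldMatrix(true);
  console.log(a.intersectsMesh(b)); // now reflects the real positions
}
```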

Hey guys, I'm currently moving my camera around a series of sprites, but the movement is painful to look at and jerky when moving the sprites, especially at higher speeds. Is there anything you can do to fix the lag/poor performance? It isn't really usable as a build in its current state.
As @Wingnut said, I also have the issue that the transparent background is still clickable.
Here is my current playground: https://www.babylonjs-playground.com/#41N19L#3
@Deltakosh @Wingnut @JohnK Any insights, guys?

My earlier question has disappeared or been deleted. Let me try again: can I run Babylon.js in a server mode, similar to UE4's server mode, where no rendering or other processing is required, only the game and actor states?
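It can, at least for the headless part: Babylon ships a `NullEngine` intended for running scenes without a canvas or any actual rendering, e.g. under Node for an authoritative server. A minimal sketch; the actor/velocity bits are stand-ins for your own game state:

```javascript
// Pure helper: one authoritative state step (stand-in for game logic).
function stepActor(state, dt) {
  return { x: state.x + state.vx * dt, vx: state.vx };
}

// Guarded so the sketch stands alone without Babylon loaded.
if (typeof BABYLON !== 'undefined') {
  const engine = new BABYLON.NullEngine(); // no canvas, no GPU work
  const scene = new BABYLON.Scene(engine);
  const box = BABYLON.MeshBuilder.CreateBox('box', {}, scene);
  let state = { x: 0, vx: 2 };
  engine.runRenderLoop(() => {
    state = stepActor(state, engine.getDeltaTime() / 1000);
    box.position.x = state.x; // scene state stays authoritative server-side
    scene.render();           // updates matrices etc., draws nothing
  });
}
```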

My issue here is that I'm trying to create a countdown timer using fonts and styling of my choice, but I don't know how to update the text variable. I want to call the addChild method once and have the value of that string updated in a function. The only way I can see the updated countdown is with the addChild method, but at the moment that leaves duplicate copies of the object on the screen. The renderer does not clear the canvas, even though I read in the documentation that it's set to do that by default. I tried two methods to get the desired result:
var countdown = new PIXI.Text(days + " days " + hours + " hours " + minutes + " minutes " + seconds + " seconds ", timerStyle);
countdown.updateText();
timerContainer.addChild(countdown);

// second method: create a sprite from the text's texture
textSprite = new PIXI.Sprite(countdown.texture);
timerContainer.addChild(textSprite);
I did some extensive Google searching and learned that you need to call .updateText() for the text variable to actually update. I thought this would work, but it does nothing to change the string value on the screen. I have also tried declaring the countdown variable and adding it to the stage outside the function, but the text does not update.
Any suggestions would be greatly appreciated.
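A sketch of the usual fix: add the `PIXI.Text` once, then assign to its `text` property each tick; Pixi regenerates the glyphs for you, so neither `updateText()` nor repeated `addChild` calls are needed (the repeated `addChild` is also what leaves the duplicates on screen). `timerStyle` and `timerContainer` are the names from your code:

```javascript
// Pure helper: the countdown string.
function formatCountdown(days, hours, minutes, seconds) {
  return days + ' days ' + hours + ' hours ' +
         minutes + ' minutes ' + seconds + ' seconds';
}

// Guarded so the sketch stands alone without Pixi loaded.
if (typeof PIXI !== 'undefined') {
  const countdown = new PIXI.Text(formatCountdown(0, 0, 0, 0), timerStyle);
  timerContainer.addChild(countdown); // once, outside the update loop
  setInterval(() => {
    // recompute days/hours/minutes/seconds from the target date, then:
    countdown.text = formatCountdown(days, hours, minutes, seconds);
  }, 1000);
}
```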

I am confused with the two main methods to animate objects in pixi.js.
The first one is to build a sprite and put it in a PIXI.Container. I can then add this container as a child of the PIXI stage. Every tick of the animation I just need to update the sprite's position and PIXI will handle the rest for me.
I can also build a sprite or a DisplayObject and create a RenderTexture which is about the same size as what I want to render. Then I use
renderer.render(DisplayObject, RenderTexture);
to render it on the screen. This works fine too. I still need to update the position on every tick so the displayObject will animate.
But what is the difference between using a Container and a RenderTexture? There must be a reason we have two options to render something. Which use cases are these two methods best suited for?
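The usual division of labour, sketched with Pixi v4 API names (`RenderTexture.create(width, height)`; v5 takes an options object): the Container path is the live scene graph, where the renderer redraws everything each frame, while a RenderTexture is for baking expensive, mostly static content into a texture once and then drawing it as a single cheap Sprite. `shouldBake` is just a rule of thumb invented for this sketch:

```javascript
// Rule of thumb (made up for this sketch): bake heavy, static content.
function shouldBake(childCount, changesPerFrame) {
  return childCount > 100 && changesPerFrame === 0;
}

// Guarded so the sketch stands alone without Pixi loaded.
if (typeof PIXI !== 'undefined') {
  const renderer = PIXI.autoDetectRenderer(512, 512);
  const stage = new PIXI.Container();

  const complex = new PIXI.Container();
  for (let i = 0; i < 1000; i++) {
    const g = new PIXI.Graphics();
    g.beginFill(0x3366ff);
    g.drawCircle(Math.random() * 512, Math.random() * 512, 3);
    g.endFill();
    complex.addChild(g);
  }

  if (shouldBake(complex.children.length, 0)) {
    const baked = PIXI.RenderTexture.create(512, 512);
    renderer.render(complex, baked);        // pay the draw cost once
    stage.addChild(new PIXI.Sprite(baked)); // one cheap draw per frame
  } else {
    stage.addChild(complex);                // live, redrawn every frame
  }
}
```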

Hi guys,
I'm fairly new to Babylon and this forum, but I already like it a lot. Thanks for the knowledge that is shared over here!
I'm trying to build my first little Babylon experiment with a friend, and we ran into a problem which we don't know how to solve.
It's a small scene with a couple of skeletal animations. You can find it over here: http://www.somewhere.gl/beta/AnimTest_v06.html
When it's done it should look like this: ...Reference Frame
The meshes get rendered in a funky manner/order. We think it has something to do with the alphaIndexes and all our non-opaque meshes. We already searched and read an article about how things are rendered, but we still don't know the best way to fix this.
Some advice or best practices would be very welcome.
THANKS IN ADVANCE
flo
PS: I had some trouble building a working playground (it always said that I need to have a camera).
-- A LOT OF CODE WAS POSTED HERE --
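One concrete knob worth trying, sketched with hypothetical mesh names: Babylon sorts non-opaque meshes back-to-front by distance each frame, and near-coincident transparent meshes can flip order. `alphaIndex` overrides that with a manual, stable order (lower values render first):

```javascript
// Pure helper: map an ordered list of mesh names (back to front) to
// alphaIndex values.
function alphaOrder(namesBackToFront) {
  const out = {};
  namesBackToFront.forEach((name, i) => { out[name] = i; });
  return out;
}

// Guarded Babylon usage; the mesh names are assumptions about the scene.
if (typeof BABYLON !== 'undefined' && typeof scene !== 'undefined') {
  const order = alphaOrder(['body', 'glassInner', 'glassOuter']);
  for (const name of Object.keys(order)) {
    const mesh = scene.getMeshByName(name);
    if (mesh) mesh.alphaIndex = order[name]; // lower renders first
  }
}
```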