Everything posted by Sirisian

This video contains a rough overview of how Microsoft does it. There are no complete libraries for this as far as I'm aware. There are probably papers on each individual technique, but the process is essentially a pipeline of different point cloud and mesh algorithms. (This problem seems extremely complex, by the way, when it comes to creating keyframes and delta frames. Microsoft might have broken new ground with this research.)

Would this happen to be a normal uniform grid? I wrote a blog article years ago that covered the grid and spatial hashing methods. I might compare the data structures I use against yours.
A big-picture idea in interest management is that you don't have to run it in real time. As Frob mentioned, loose variations are often used that exploit this. If designed correctly, you can update entities in cells, quadtrees, etc. once per second, then query and build the list of nearby entities every 2 seconds, or stagger updates across many ticks. This can get kind of complex, but in the end you can end up with a system that supports thousands of entities and scales. (Usually overkill.)
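To make the staggering idea concrete, here's a minimal C++ sketch (all names are hypothetical) of spreading expensive proximity queries across ticks so each entity's nearby list is only rebuilt once every `interval` ticks:

```cpp
#include <vector>

struct Entity
{
    int id;
    int refreshCount = 0; // stands in for the expensive per-entity work
};

// Each tick only refreshes the slice of entities whose id falls into this
// tick's slot, so the expensive query runs 1/interval as often per entity.
// Returns how many entities were refreshed this tick.
int tick(std::vector<Entity>& entities, int tickCount, int interval)
{
    int refreshed = 0;
    for (auto& e : entities)
    {
        if (e.id % interval == tickCount % interval)
        {
            // placeholder for: e.nearby = spatialGrid.query(e);
            e.refreshCount++;
            refreshed++;
        }
    }
    return refreshed;
}
```

With 10 entities and an interval of 2, each tick touches only half of them, yet every entity is still refreshed within 2 ticks.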

Not sure if anyone else was planning to use Skia for text rendering, but they confirmed they have a Vulkan backend in the works. Kind of surprised since I thought I'd be waiting months or have to write my own backend.

From an image processing standpoint you should be able to use a gradient technique: https://en.wikipedia.org/wiki/Image_gradient I'd start with a 3x3 kernel and maybe combine it with a larger kernel to generate a gradient map. Post-process the gradient to smooth it, then apply the changes to the grayscale image. Using this you can even increase the resolution.
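As a rough illustration, a 3x3 Sobel kernel (one common image-gradient kernel) over a grayscale buffer might look like this in C++; the buffer layout and function name are my own assumptions:

```cpp
#include <cmath>
#include <vector>

// Computes per-pixel gradient magnitude with a 3x3 Sobel kernel.
// `img` is a row-major grayscale image of size w*h; border pixels stay 0.
std::vector<float> sobelMagnitude(const std::vector<float>& img, int w, int h)
{
    std::vector<float> out(img.size(), 0.0f);
    for (int y = 1; y < h - 1; ++y)
    {
        for (int x = 1; x < w - 1; ++x)
        {
            // Neighborhood sample relative to the current pixel.
            auto p = [&](int dx, int dy) { return img[(y + dy) * w + (x + dx)]; };
            float gx = -p(-1, -1) - 2 * p(-1, 0) - p(-1, 1)
                       + p(1, -1) + 2 * p(1, 0) + p(1, 1);
            float gy = -p(-1, -1) - 2 * p(0, -1) - p(1, -1)
                       + p(-1, 1) + 2 * p(0, 1) + p(1, 1);
            out[y * w + x] = std::sqrt(gx * gx + gy * gy);
        }
    }
    return out;
}
```

The smoothed magnitude map is what you'd post-process before applying changes back to the grayscale image.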

I should have been clearer. I can probably write an iterative solution, but I'm more interested in a closed-form solution. (Since I've shown a partial solution can already be calculated for most cases.)
edit: Someone solved it on Stack Overflow. Interesting.

The surface in yours looks like a large ocean of jello. The primary reason is that you stretch and compress the whole mesh, even the finest details. From watching videos of ocean waves, the high-frequency noise and slow-moving froth stay in place and move much less than the low-frequency waves and crests. It's like you start with a large taut blanket, then add high-frequency noise that moves very little relative to the lower-frequency waves. As you add lower-frequency waves the movements get larger and the waves affect each other more. Low-frequency waves also crest: the lower the frequency (the larger the wave), the more foam. You lose a lot of detail without this.
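As a loose sketch of that idea, a sum-of-sines heightfield where each finer octave gets less amplitude and a slower phase speed (so the fine detail stays nearly in place while the big swells move) would look something like this; all the constants are illustrative guesses, not tuned values:

```cpp
#include <cmath>

// Illustrative sum-of-sines ocean height at position x and time `time`.
// Each octave doubles the spatial frequency while shrinking both the
// amplitude and the rate the phase advances, so high-frequency detail
// displaces less and moves less than the low-frequency swell beneath it.
float oceanHeight(float x, float time, int octaves)
{
    float height = 0.0f;
    float frequency = 0.05f; // start with a long, slow swell
    float amplitude = 1.0f;
    float speed = 1.0f;
    for (int i = 0; i < octaves; ++i)
    {
        height += amplitude * std::sin(frequency * x + speed * time);
        frequency *= 2.0f; // finer spatial detail each octave
        amplitude *= 0.4f; // finer detail displaces less
        speed *= 0.5f;     // and its phase advances more slowly
    }
    return height;
}
```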
Also, for more realistic rendering (which adds a lot to the overall effect) you might want to render your objects first to a texture and extract a depth map, then use that when rendering the water and take the difference between the depth values. Using something like Beer–Lambert for the depth-based transparency would add a lot. (I'm sure there's a paper with a more accurate volumetric transparency model for the water though.)
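A minimal sketch of that Beer–Lambert attenuation, assuming you already have the scene depth (from the pre-rendered depth map) and the water-surface depth for a pixel; the absorption coefficient is a made-up parameter you'd tune:

```cpp
#include <cmath>

// Beer–Lambert depth-based transparency: light is attenuated exponentially
// with the thickness of water the view ray passes through.
// `absorption` is a hypothetical per-unit absorption coefficient.
float waterTransmittance(float sceneDepth, float surfaceDepth, float absorption)
{
    float thickness = sceneDepth - surfaceDepth; // water in front of the object
    if (thickness <= 0.0f)
    {
        return 1.0f; // object is at or in front of the surface: no attenuation
    }
    return std::exp(-absorption * thickness);
}
```

You'd use the returned transmittance to blend the underwater color toward the water color, so deeper objects fade out smoothly.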

This should probably be in the Multiplayer and Networking section. There are a few ways to handle what you're describing.
Assuming you have a hash grid with a uniform cell size, you just need to use the player's current camera on the server to collect all the entities in range. That just requires iterating through the cells and performing a look-up in the hash grid. Then, for all the cells that exist in the hash grid, union their entities into a list. This list represents all the entities the client can see. I have a tutorial on spatial partitioning and the queries here. That said, the basic query function would look like this in C#:
public HashSet<IGridEntity> Query(AxisAlignedBoundaryBox aabb)
{
    var entities = new HashSet<IGridEntity>();
    // Convert the query AABB into an inclusive range of cell coordinates.
    var startX = (int)(aabb.MinX / CellSize);
    var endX = (int)(aabb.MaxX / CellSize);
    var startY = (int)(aabb.MinY / CellSize);
    var endY = (int)(aabb.MaxY / CellSize);
    for (var x = startX; x <= endX; ++x)
    {
        for (var y = startY; y <= endY; ++y)
        {
            // Only cells that actually exist in the hash grid contribute.
            Cell cell;
            if (grid.TryGetValue(y * dimensions + x, out cell))
            {
                entities.UnionWith(cell.Entities);
            }
        }
    }
    return entities;
}
Now once you have all the entities, you have to tell the player about them. In the past I've advocated a full and delta state pattern. That is, for every client you store an array of all the entities you've told that client about, and on every message after that you simply tell them what's changed. With this approach the array is empty when the client connects. We can refer to this array as the "known entities" array.
Depending on how you're building packets, you'd add all the information about any new entities that aren't in the array (so that the client can recreate them). This would include things like the entity type. You're right that you'll want to give all the entities a unique id; you'll use this id to refer to the client entity later. If the entity is already known by the client (it's in the known entities array), then you just need to send a delta packet with the things that have changed, like position. You can use bools to track changed fields in the entity, then after sending data to all the clients set the bools back to false.
That's all there is to it. Every server tick, query for the entities in the camera, then check the known entities and either add full entity data to the packet or write delta entity data. The client has its own array of known entities, so if it receives an entity id it's never seen it knows it needs to deserialize the full state; if it already knows about the entity it deserializes a delta.
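A hypothetical C++ sketch of that full/delta decision, using a per-client known-entities set (the types and names are mine, not from the tutorials):

```cpp
#include <cstdint>
#include <unordered_set>
#include <vector>

struct Entity
{
    uint32_t id;
    float x, y;
    bool positionChanged; // dirty flag set whenever the entity moves
};

enum class RecordType : uint8_t { Full, Delta };

// For each visible entity, decide whether the client needs the full state
// (first time it's seen) or just a delta (already in the known set).
// Returns the record types written, standing in for real serialization.
std::vector<RecordType> buildPacket(std::unordered_set<uint32_t>& known,
                                    std::vector<Entity>& visible)
{
    std::vector<RecordType> records;
    for (auto& e : visible)
    {
        if (known.insert(e.id).second)
        {
            records.push_back(RecordType::Full);  // write type, id, full state
        }
        else if (e.positionChanged)
        {
            records.push_back(RecordType::Delta); // write id + changed fields
            // (in practice you'd clear the dirty flag only after packets
            //  for ALL clients have been built, as described above)
        }
    }
    return records;
}
```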
The two tutorials I linked go into things in a lot more detail, including how to forget entities. There are a lot of optimizations, but this is a fairly simple technique, especially for a lock-step game. (It forms the basis for a lot of techniques for drastically reducing bandwidth.)

In regards to WebSockets, I personally wouldn't bother with socket.io. You can just use the ws module for games. Falling back isn't necessary anymore for any browser or device. (Nor is it really worth it for a game.)

I've always found the CSS documentation on MDN to be a nice overview of timing functions: https://developer.mozilla.org/en-US/docs/Web/CSS/timing-function
Also check this out: http://www.gdcvault.com/play/1020583/Animation-Bootcamp-An-Indie-Approach (It starts with movement and continues into a full animation system defined by curves.)

Yeah, that's kind of how I figured it and how I set it up in my code dump above. Specifically I did:
hr = mD3D11On12Device->CreateWrappedResource(mTextTexture.Get(), &mD3D11TextureFlags, D3D12_RESOURCE_STATE_RENDER_TARGET, D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE, __uuidof(ID3D11Resource), (void**)mD3D11Texture.GetAddressOf());
then
ID3D11Resource* resourcesToAcquire[] = { mD3D11Texture.Get() };
mD3D11On12Device->AcquireWrappedResources(resourcesToAcquire, ARRAYSIZE(resourcesToAcquire));
// SetTarget BeginDraw EndDraw
mD3D11On12Device->ReleaseWrappedResources(resourcesToAcquire, ARRAYSIZE(resourcesToAcquire));
I use it in an SRV, thus the out state is pixel shader resource. That does clear up any doubts I had about the usage.
I think I need to find someone with very in-depth knowledge to look at the code to find the specific issue. I've switched gears and am learning other D3D12 stuff to see if maybe I learn something relevant to help.

I've done about as much as I can to simplify my code examples for someone else to help. "D3D12-DirectWrite.zip" has an implementation that I believe should be valid. All it does is create an ID3D12Resource and then use it as a render target for D2D and DirectWrite to render to. It calls Render() one time to render a single frame. D2D never renders to the texture though, and D3D12 renders a black texture.
My current issues are in understanding ID3D11On12Device's CreateWrappedResource method. It has an "InState" and "OutState", but it's unclear to me how they should be set. The AcquireWrappedResources and ReleaseWrappedResources methods also have no clear usage explained, other than that they seem mandatory. I assume you have to acquire the texture, then render, then release it when you're done, but it's not clear when this needs to happen in my code.
The last piece I'm confused about is whether D3D12 has to do anything when working with D2D. D3D11On12CreateDevice accepts an ID3D12CommandQueue. I'm assuming D3D12 doesn't have to do anything and that D2D will set up its own command list internally, submit it to the command queue, and execute it in D2D's EndDraw method. I can't find any confirmation of this though.
I've attached another example which I do not understand at all. "D3D12-DirectWrite-DrawsText.zip" is a program that draws text. I was copying code around and suddenly it started rendering. It's odd though: it only renders after a few frames. If you change the code to call "Render();" just once, it renders a black square. I assume whatever it's doing is very undefined, but it shows that somehow, someway, D2D and DirectWrite can be made to work with D3D12. I just can't find the right way.
What I want is essentially contained operations.
SetupEverything();
RenderTextToTexture(); // D2D and DirectWrite have completely finished rendering to a texture
Render(); // Draws a single frame with the texture rendered.
If someone can look at the code I'd appreciate it. I have very little DirectX experience, so it's possible I'm doing something silly and unrelated wrong. Or if someone can find someone at Microsoft to look at it and write a real example, that would be nice, since I'm not sure if what I'm seeing is a bug. I've spent way too long on this.

Is the VS graphics analyzer supposed to work right now? I can run the diagnostics tools and take frame screenshots, but if I try to view the frames it opens the analyzer and then a window displays "Visual Studio Graphics Analyzer has stopped working". Is there any information on when this feature will be available/fixed?
Ah, the VS RC has a working graphics analyzer. Ooh, I got text to render; I think I finally figured out my bug! The graphics analyzer crashes though if I try to capture a frame. Probably still doing something wrong. More investigation is needed.

Okay, I've been noticing something. Are AcquireWrappedResources and ReleaseWrappedResources actually implemented? The documentation isn't very clear on whether I'm using them right: https://msdn.microsoft.com/en-us/library/dn913197%28v=vs.85%29.aspx I think I am: you acquire the D3D11Resource, then render, and it's like rendering to the D3D12Resource? It doesn't seem to matter though. It's like the two functions do absolutely nothing currently, since the D3D12Resource is still black. It's almost like CreateWrappedResource gives you a new, separate resource that doesn't affect the D3D12 resource. If anyone has access to any documentation or secret information they could hint at, I'd be really grateful. I've tried everything I could think of.

Yes, IDXGIDevice interfaces are no longer supported with Direct3D 12 (that's not a big secret, as you found out!).
If you look in the SDK you will find a header (and its .IDL) called "d3d11on12", and if you open it you will find what this new API is meant for: interop between Direct3D 12 and Direct3D 11 (also not a big secret, since it's stated in the API comments).
Darn. Yeah, I commented about that the other day. I got the impression that d3d11on12 was the only way, but noticed the only missing piece was the IDXGIDevice. It seemed weird to have to use a whole separate API as a workaround. I have it partly coded though. Thanks for the clarification.
Actually not too bad. Just needed this I think:
const D3D_FEATURE_LEVEL featureLevels[] = { D3D_FEATURE_LEVEL_11_1 };
IUnknown* commandQueues[] = { mCommandQueue.Get() };
Microsoft::WRL::ComPtr<ID3D11Device> mD3D11Device;
Microsoft::WRL::ComPtr<ID3D11DeviceContext> mD3D11DeviceContext;
hr = D3D11On12CreateDevice(mDevice.Get(), D3D11_CREATE_DEVICE_SINGLETHREADED | D3D11_CREATE_DEVICE_BGRA_SUPPORT, featureLevels, ARRAYSIZE(featureLevels), commandQueues, 1, 1, mD3D11Device.GetAddressOf(), mD3D11DeviceContext.GetAddressOf(), nullptr);
This returns S_OK.
It works! Then also you need to do:
Microsoft::WRL::ComPtr<IDXGIDevice> dxgiDevice;
hr = mD3D11Device.As(&dxgiDevice); // S_OK
Microsoft::WRL::ComPtr<ID2D1Device1> mD2DDevice;
hr = mD2DFactory->CreateDevice(dxgiDevice.Get(), mD2DDevice.GetAddressOf()); // S_OK
I'll have a full example when I'm done understanding and learning. I don't want to give out bad code examples since I've never really used D3D before this point.

Okay, I almost have DirectWrite and Direct2D working. All the code is done from what I can tell. One big issue though, and I fear this is still a critical bug: you can't get an IDXGIDevice from an ID3D12Device?
Microsoft::WRL::ComPtr<ID3D12Device> mDevice;
hr = D3D12CreateDevice(.., D3D_DRIVER_TYPE_UNKNOWN, D3D12_CREATE_DEVICE_NONE, D3D_FEATURE_LEVEL_11_1, D3D12_SDK_VERSION, __uuidof(ID3D12Device), (void**)&mDevice);
Microsoft::WRL::ComPtr<IDXGIDevice2> dxgiDevice;
hr = mDevice.As(&dxgiDevice); // E_NOINTERFACE No such interface supported.
As you may know, the alternative code shown in the documentation, when not using ComPtr, is:
hr = mDevice->QueryInterface(__uuidof(IDXGIDevice2), (void**)&dxgiDevice); // E_NOINTERFACE No such interface supported.
This produces the same error with IDXGIDevice, IDXGIDevice1, IDXGIDevice2, or IDXGIDevice3.
Since getting a valid IDXGIDevice is required to create an ID2D1Device1 (using the ID2D1Factory2), it's impossible to use DirectWrite easily with D3D12 as far as I can see. Is there something simple I'm missing? Did something change in how you go from an ID3D12Device to an IDXGIDevice2?

Just so I don't waste time: what's the easiest way to use DirectWrite with D3D12? The way that currently seems viable is D3D11 with Direct2D (to use DirectWrite) through an ID3D11On12Device? This seems slightly convoluted. Am I missing something obvious?