Macropolygon: Simon's graphics musings
https://macropolygon.wordpress.com
Evaluating Debugged-Process Memory in a Visual Studio Extension
Sun, 16 Dec 2012 20:51:07 +0000
https://macropolygon.wordpress.com/2012/12/16/evaluating-debugged-process-memory-in-a-visual-studio-extension/

I’ve recently been working on a Visual Studio extension that acts a bit like the Visual Studio memory window, but visualises the memory as a texture. Like the memory window, there’s a box for the user to type in an address expression. The plug-in then needs to evaluate the expression in the context of the debugger’s current thread and stack frame, and then retrieve the contents of the debuggee process’s memory space at that address.

This is a relatively common operation for the debugger — it’s done by the memory window, as well as the watch windows, of course. I therefore assumed it would be fairly straightforward. I was very wrong. After much hair-pulling, however, I was finally able to get it to work, so I thought I’d document it here in case anyone else needs to do something similar.

It’s worth noting that the plug-in I’m writing is written in C#, so all the code snippets will be C# too. The VSX API is COM, though, so the steps should be basically the same whatever the language. And I’m targeting VS 2012, so all of this may or may not work with other versions. The API doesn’t look to have changed between 2010 and 2012, so there’s a good chance it will work there, but I haven’t tried it.

Evaluating Expressions

The Visual Studio automation API provides a way to evaluate expressions in the current stack frame (EnvDTE.DTE.Debugger.GetExpression), which is of course exactly what we want. However, the returned type doesn’t allow direct access to the debuggee’s memory, just the evaluated value as a string.
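As a minimal sketch of the automation-API approach (assuming `dte` is the `EnvDTE.DTE` service and the debugger is in break mode; the expression string is just an example):

```csharp
// Evaluate an expression in the current stack frame via the automation API.
// "myBuffer" is a hypothetical variable in the debuggee.
EnvDTE.Expression expr = dte.Debugger.GetExpression("&myBuffer[0]", true, -1);
if (expr.IsValidValue)
{
    // Only the stringified result is available -- there's no way to get
    // at the raw bytes of the debuggee's memory from here.
    string value = expr.Value;
}
```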

So, we need to use the full ‘package’ VSX API. The main interface that represents a property in the debugger in the VSX API is IDebugProperty2. Once you have one of these, you can fairly easily use it to get the contents of the debuggee’s memory. To get an IDebugProperty2, you can call IDebugExpression2.EvaluateSync. You can get an IDebugExpression2 by calling ParseText on an IDebugExpressionContext2 interface. And you can get one of those by calling GetExpressionContext on an IDebugStackFrame2. So the challenge becomes to get an IDebugStackFrame2 representing the ‘current’ stack frame — i.e. the one that is used to evaluate watch expressions.
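The chain above can be sketched roughly as follows (a sketch only, assuming `frame` is the current IDebugStackFrame2; error handling is omitted, and "pBuffer" is a hypothetical expression):

```csharp
// From a stack frame down to raw debuggee memory, using the
// Microsoft.VisualStudio.Debugger.Interop interfaces.
IDebugExpressionContext2 context;
frame.GetExpressionContext(out context);

IDebugExpression2 expression;
string parseError;
uint errorIndex;
context.ParseText("pBuffer", enum_PARSEFLAGS.PARSE_EXPRESSION,
                  10, out expression, out parseError, out errorIndex);

IDebugProperty2 property;
expression.EvaluateSync(enum_EVALFLAGS.EVAL_NOSIDEEFFECTS,
                        5000, null, out property);

// The property yields a memory context (the address) and a memory-bytes
// interface to read from at that address.
IDebugMemoryContext2 memoryContext;
property.GetMemoryContext(out memoryContext);

IDebugMemoryBytes2 memoryBytes;
property.GetMemoryBytes(out memoryBytes);

byte[] data = new byte[256];
uint read = 0, unreadable = 0;
memoryBytes.ReadAt(memoryContext, (uint)data.Length, data,
                   out read, ref unreadable);
```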

Registering for Debug Events

Despite much searching, I couldn’t find a way in the VSX API to directly get the current stack frame (or the current thread, or even the current process, for that matter). However, there is a way to register for debug events, which enables the caller to receive a callback when the state of the debugger changes, including, for example, hitting a breakpoint. And the callback takes as parameters various interfaces representing the new state of the debugger.

This is achieved by implementing the IDebugEventCallback2 interface, and then passing it to IVsDebugger.AdviseDebugEventCallback. This method takes an object (IUnknown in the underlying COM gubbins), but it will magically be cast to an IDebugEventCallback2 underneath. Needless to say, this took me some time to realise! Anyway, here’s some code that does this.
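A sketch of the registration, assuming the containing class implements IDebugEventCallback2 and runs inside a package where the global service provider is available:

```csharp
// Register for debug events. IVsDebugger comes from the SVsShellDebugger
// service; AdviseDebugEventCallback takes an object, but it must actually
// implement IDebugEventCallback2 underneath.
IVsDebugger debugger =
    (IVsDebugger)Package.GetGlobalService(typeof(SVsShellDebugger));
debugger.AdviseDebugEventCallback(this);

// And when the tool window / package shuts down:
// debugger.UnadviseDebugEventCallback(this);
```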

The debugger indicates an event by calling the Event method of the IDebugEventCallback2 interface. The parameters include the GUID of the event that has fired, along with a load of interfaces representing the state of the debugger. I needed an event that gets fired when you hit a breakpoint (or the debugger breaks for some other reason, like an exception), and also when the user changes the stack frame or thread. Fortunately there is one event that covers all these scenarios. Its GUID is {ce6f92d3-4222-4b1e-830d-3ecff112bf22}, but I have no idea what event it represents — I could find no mention of it in the docs or on Google. But it gets fired at just the right time, so I’m not going to lose much sleep over that!
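A sketch of the callback itself, filtering on that GUID (the member and handling inside the `if` are placeholders):

```csharp
// The undocumented event that fires when the debugger breaks or the user
// changes the current thread/stack frame.
private static readonly Guid BreakOrFrameChangeEvent =
    new Guid("ce6f92d3-4222-4b1e-830d-3ecff112bf22");

public int Event(IDebugEngine2 engine, IDebugProcess2 process,
                 IDebugProgram2 program, IDebugThread2 thread,
                 IDebugEvent2 debugEvent, ref Guid riidEvent, uint attributes)
{
    if (riidEvent == BreakOrFrameChangeEvent && thread != null)
    {
        // 'thread' is the current thread -- we can enumerate its stack
        // frames from here (see below).
    }
    return Microsoft.VisualStudio.VSConstants.S_OK;
}
```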

Getting the Current Stack Frame

Unfortunately, the parameters to IDebugEventCallback2.Event do not include the current stack frame (that would be too easy!). It does give you the current thread (as IDebugThread2), though. With this, you can enumerate all the stack frames for that thread using the EnumFrameInfo method. This still gives you no information about which frame is the current one, however.

Fortunately, we can get this information from the automation API. EnvDTE.DTE.Debugger.CurrentStackFrame returns a StackFrame object. This in and of itself isn’t much use to us, but the object is actually an instance of StackFrame2 (obviously!) So, casting to this type, we can get its Depth property, which indicates how deep in the stack the current frame is. It’s worth noting that the top frame has a depth of 1, not 0. Here’s some code to do all that.
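For example (a sketch, assuming `dte` is the `EnvDTE.DTE` service; StackFrame2 lives in the EnvDTE90a namespace):

```csharp
// Get the depth of the current frame via the automation API.
// Note that Depth is 1-based: the top frame has depth 1.
var currentFrame = dte.Debugger.CurrentStackFrame as EnvDTE90a.StackFrame2;
if (currentFrame != null)
{
    uint depth = currentFrame.Depth;  // 1 == top of stack
}
```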

So, we now have a way of enumerating the stack frames of the current thread, and we know the depth of the current frame, so it’s simple to put these two together and get the current stack frame. Here’s a bit more code that does just that.
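Putting the two together might look something like this (a sketch, assuming `thread` is the IDebugThread2 from the event callback and `depth` is the 1-based value from StackFrame2.Depth):

```csharp
// Walk the thread's frame list until we reach the current frame.
IEnumDebugFrameInfo2 frameEnum;
thread.EnumFrameInfo(enum_FRAMEINFO_FLAGS.FIF_FRAME, 10, out frameEnum);

var frameInfo = new FRAMEINFO[1];
uint fetched = 0;
IDebugStackFrame2 currentFrame = null;

// Depth is 1-based, so the current frame is the depth-th one from the top.
for (uint i = 0; i < depth; ++i)
{
    if (frameEnum.Next(1, frameInfo, ref fetched) != 0 || fetched != 1)
        break;
}
if (fetched == 1)
    currentFrame = frameInfo[0].m_pFrame;
```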

Stochastic Rasterization and Deferred Rendering
Thu, 26 Aug 2010 11:27:23 +0000
https://macropolygon.wordpress.com/2010/08/26/stochastic-rasterization-and-deferred-rendering/

After discussion with Repi, and reading the recent Decoupled Sampling paper, I’ve been thinking about how deferred rendering techniques can interact with stochastic sampling for defocus and motion blur. We all know that deferred techniques don’t play very nicely with MSAA, but the issues are generally solvable, for a few reasons:

The number of samples is usually small (2 or 4, typically)

The number of pixels that need multiple samples is relatively low (i.e. only edge pixels)

The mapping of shading samples to visibility samples is straightforward – the shading sample is the pixel that contains the visibility samples.

Stochastically-sampled defocus and motion blur blow all of these out of the water:

The number of samples needed is large.

Every pixel covered by a defocused or moving surface needs multiple samples, not just edge pixels.

The mapping from shading samples to visibility samples is far from trivial.

So, the blunt answer to “how does stochastic sampling interact with deferred techniques” is “it doesn’t.” And by “deferred techniques” I don’t just mean deferred shading. I mean anything that uses the existing scene contents to alter the scene outside of the main render pass but isn’t a pure post-process, including things like deferred fog, soft particles, projected decals and SSAO. The last one’s a bit of a kicker, since there’s no way to do it in anything other than a deferred fashion.

The problem is that once the render target has been resolved to pixels, non-colour values no longer make any sense. This is the exact same problem as when using MSAA, but everywhere. If a fast-moving foreground object moves across a distant object, then the depth values for pixels covered by the blurred object end up somewhere in between the two objects, which is clearly not usable. Just taking the nearest depth is not a solution either, especially in cases where most samples come from the “far” object, as the motion blur will leave a shadow of incorrect shading.
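A toy illustration of why a resolved depth is meaningless (the numbers here are entirely made up for the example):

```csharp
// A pixel gets 16 stochastic samples: 4 land on a near object sweeping
// across it, 12 on the far object behind.
double nearDepth = 1.0, farDepth = 100.0;
int samplesFromNear = 4, samplesFromFar = 12;

double resolved = (samplesFromNear * nearDepth + samplesFromFar * farDepth)
                  / (samplesFromNear + samplesFromFar);

// resolved == 75.25: a depth belonging to neither object, so any deferred
// pass that reads it (fog, SSAO, decals...) computes nonsense for this pixel.
System.Console.WriteLine(resolved);
```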

The current solution for this is to perform calculations at sample frequency for pixels that need it. However, this isn’t really feasible in this case, since (a) all pixels need it and (b) there’s a huge number of samples.

I’ve been trying to think up ways around this problem, related to the idea of decoupled sampling. However, I’ve not made much progress, to be perfectly honest. You could try storing a separate render target that is effectively a full-screen version of the shading space described in the paper. This would store information for an unblurred/aliased version of the scene, which could be mapped to and from the final version in the same way that the paper does at shading time. Firstly, however, that mapping is not trivial, so it would have to be stored somewhere during construction. More critically, though, failing the depth test in this unblurred version of the scene does not mean you failed the depth test in the final version. And worse still, the number of fragments from a given pixel in the unblurred version that map to contributing samples in the final version is unbounded — consider n pixel-sized balls all in the same position at time t=0, but spread out in multiple directions over the shutter interval such that they all contribute at least one sample to the final image; there is no theoretical limit on n.

So, this implies you need some form of list at each pixel of shading samples that contribute to the final image, and the location of the samples that they contribute to. Which is all beginning to sound (a) complicated (b) messy and (c) slow.

So, assuming someone cleverer than I doesn’t come up with a solution to the problem, this rather unfortunate conclusion seems to suggest that, if we want stochastic sampling for defocus and motion blur, we may need to sacrifice some of the techniques that we have come to rely on in games over the last few years. That, or we’re stuck with post-processed depth of field and motion blur for a while longer, which isn’t a very happy thought.

APB Customisation Presentation
Thu, 12 Aug 2010 19:50:12 +0000
https://macropolygon.wordpress.com/2010/08/12/apb-customisation-presentation/

This is the first in what will hopefully become a series of posts detailing some of the techniques and ideas that we used in making APB. It’s also the easiest one to write, since it’s already done!

My boss Maurizio Sciglio and I gave a talk at GDC 2010 about APB’s customisation system. The slides and video are available at the GDC Vault (http://www.gdcvault.com/), but you need an account to get the video, and the slides by themselves are a bit spartan, as I don’t like putting too many words on my slides. So, I thought I’d post the PowerPoint originals here, which have a bit more detail in the notes section.

Maurizio and I are happy to answer any questions about the systems described within, so feel free to ask in the comments or via email.