Rather than searching for the VS .gitignore file, use https://gitignore.io. This website lets you pick tags, such as visualstudio, and generates a .gitignore file you can download. It's built from the same .gitignore files in the GitHub repo your search turns up, but it's a much better way to work with them.

While I've got a few GitHub repos, I don't do much collaborating where I'd deal with pull requests. So, all I know about this is what little was demonstrated here. But, man, it looks nice. Why isn't there an equivalent for VSTS? I'd love to be able to work with pull requests on VSTS inside Visual Studio rather than using the web interface.

Constructors should just initialize an object, yes, but that doesn't mean they shouldn't ever create a task. It's unusual, but not wrong.

As I pointed out, there's more to observing tasks than just ensuring you handle exceptions, and there's plenty wrong with using Wait(), Result, and kin in async code. Google "async all the way down" for deep discussions on the topic.

ConfigureAwait isn't tricky, and it should always be used when you have no need to sync back with a SynchronizationContext (such as needing to get back to a UI thread). In fact, failure to do so can actually result in deadlocks.
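
To make that concrete, here's a minimal sketch (hypothetical names and URL) of library code using ConfigureAwait(false), and why it matters when a caller blocks:

    using System.Net.Http;
    using System.Threading.Tasks;

    public static class DataLoader
    {
        // Library code: nothing below needs the caller's SynchronizationContext,
        // so the await uses ConfigureAwait(false) and resumes on a thread pool thread.
        public static async Task<string> LoadAsync(HttpClient client, string url)
        {
            var body = await client.GetStringAsync(url).ConfigureAwait(false);
            return body.Trim();
        }
    }

If a UI event handler blocks with DataLoader.LoadAsync(client, url).Result, that ConfigureAwait(false) is what keeps the continuation off the blocked UI thread. Without it, the continuation waits for the UI thread while the UI thread waits on Result, and you deadlock.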

I'm not 100% sure what you're referring to in your last paragraph, but it sounds like you're advocating not using await within a using? Can't agree there. I'm also uncertain what you mean by "why would you make an async function with side effects on the stream", but that sounds like you're being too philosophical. Yeah, side-effect-free code is always better, but the reasons you'd write synchronous code with side effects are the same reasons you'd write asynchronous code with side effects. Heck, I/O is the very definition of a side effect, and the best place to use async is with I/O.
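
For instance, here's a minimal sketch (hypothetical file name) of an await inside a using, doing I/O with an obvious side effect:

    using System.IO;
    using System.Threading.Tasks;

    public static class AuditLog
    {
        // Async code with a side effect (appending to a file), awaited inside a using.
        // The writer is disposed only after the awaited write has completed.
        public static async Task AppendAsync(string message)
        {
            using (var writer = new StreamWriter("audit.log", append: true))
            {
                await writer.WriteLineAsync(message);
            }
        }
    }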

I'd like to point out there's another reason to always "observe" a task's result, one that may resonate with people better. Every time I've mentioned that you should always observe a task, someone wants to avoid it (probably because you have to write more code) for "fire and forget" methods. After all, you're not "forgetting" if you have to hold on to the Task in order to observe the result. When you point out the exception topic they'll respond with, "but my Task won't throw an exception" (presumably they have a blanket try/catch). My response is "not good enough." Why?

When the main thread of your application ends, the process goes away unceremoniously. Any work your background task may have been doing will just end midstream. Imagine you've got a background task writing data to a file asynchronously and the user exits the application, causing your task to be interrupted so that only part of the file gets written. Disaster! While you can still contrive scenarios where even that doesn't matter, I'd say it's too risky that code changes will put you back into this disaster mode. Always observe your tasks!
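
Here's a minimal sketch (hypothetical names) of what observing looks like: hold on to the Task and await it before the main thread is allowed to end.

    using System.IO;
    using System.Threading.Tasks;

    class Program
    {
        static Task _saveTask = Task.CompletedTask;

        static void QueueSave(string data)
        {
            // "Fire and forget"... except we keep the Task so it can be observed later.
            _saveTask = File.WriteAllTextAsync("data.txt", data);
        }

        static async Task Main()
        {
            QueueSave("important data");
            // ... other work ...

            // Observe the task before exiting: this surfaces any exception and
            // guarantees the write isn't cut off mid-file when the process ends.
            await _saveTask;
        }
    }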

@Dev Some of what you said I can agree with, and some absolutely not. You want a strongly typed language? Try Haskell. Oh, BTW, Haskell relies heavily on type inference (your hated var). Can you write bad code with it? Yes, but that's true of every API, operator, keyword and feature.

You obviously have a strong idea of what "good maintainable code" looks like. Doesn't it make sense to use tools to enforce that, and make it easier? You hate var. Wouldn't you want something to ensure your team doesn't use it then?

Pretty good "webinar". In the video it was mentioned (paraphrasing) all DI containers do reflection. This actually isn't true. What's true is that MOST containers do reflection, but there are some out there that use no reflection. In fact, it just takes a few lines of code to create a container that doesn't use reflection at all.

I wrote that code in LinqPad, so there are a few adjustments to make if you put it into a console application (basically, replace the call to "Dump" with Console.WriteLine), and I've used several new C# language features to condense the code as much as possible, but it should be very understandable to anyone. There's zero reflection in this code, and yet I'm using a container to resolve dependencies.

Ignoring the fact that this isn't production-quality code (no error handling, for instance), this naïve container is fully usable in any application you'd write. The drawbacks: registration is more complex and there are a few features missing, like the ability to tell the container to only ever create one instance of the ILogger no matter how many times you inject it, or to handle lifetime issues. There are containers available on NuGet that properly handle errors and add the missing features, but continue to use the same "no reflection" design this simple code does.

As for the concern about reflection cost if a container does use it... a good container does reflection only during registration/container build time. Internally it's using the same mechanism this naïve code uses, associating a type with a factory delegate, so when you Resolve there's no reflection at all. This means there's very little impact on the performance of your code (the only cost is a minor one at startup and some dictionary lookups when you resolve). So the speaker was absolutely correct... you'll be hard pressed to notice any performance difference when you use a DI container.

I wasn't suggesting you trust StackOverflow... I was being lazy. The answers there referenced the specification, which is what I should have hunted down and linked to. As is always appropriate, you should trust the spec before any other source, which includes P&P. Especially when that page is so vague. I could be wrong, but what I believe the page is truly pointing out is that determining the order in which initialization occurs isn't always possible, much less easy to do. So, if your initialization depends on other static members being initialized, yes there can be race conditions or even other factors that cause you problems here. That's not the case in most Singleton classes, however.

I'll grant you that avoiding the DCL is an opinion... but one that's founded on a great deal of knowledge of this space. That P&P page that states "The common language runtime resolves issues related to using Double-Check Locking that are common in other environments." is correct, but remarkably misleading. The common language runtime provides enough guarantees and features to enable you to write the pattern correctly, but it's still a terribly complicated topic that most people don't understand and can easily code incorrectly. You've done it correctly here, and given good comments about what's needed, but in the video you demonstrate that you don't actually understand it. Just my opinion, but when there are alternatives to such dangerous patterns we're better off using them instead, and most certainly should be teaching them.

On IDisposable, I ask again, when would you call Dispose? The only logical time you could do this would be at application shutdown, which is pointless. The act of shutting the application down will already clean things up. There's no point in explicitly doing so. Your example with NHibernate seems incomplete. When did they call Dispose in that example? I'm willing to bet they didn't.

I'd love to hear where P&P suggests you set a variable to null in a Dispose method. It's a pointless thing to do. If there were a reason to do this then EVERY class should implement IDisposable and we'd be writing a lot of boilerplate code to null out our members and wrapping absolutely everything in using statements. If you follow the "IDisposable pattern" you only implement IDisposable if your type contains other IDisposable types or directly uses unmanaged resources. Your Singleton doesn't, and should NOT be IDisposable, and far more importantly should not have a finalizer. There's significant cost to adding a finalizer to a class, and given we have safe handles (and the pattern we can follow for other resources) I believe the "IDisposable pattern" is broken and we should never (unless implementing a safe handle) implement a finalizer. That bit is opinion, but based on technical reasoning. Don't let that tangent derail your thinking here though, as it doesn't really have any bearing on the fact that you have no cause to use IDisposable here.
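
For contrast, here's a minimal sketch (hypothetical names) of a case where IDisposable is warranted: the class owns another IDisposable, so it implements IDisposable purely to pass the Dispose call along, and there's no finalizer in sight.

    using System;
    using System.IO;

    public sealed class DataFile : IDisposable
    {
        // We own an IDisposable member, which is the one legitimate trigger for
        // implementing IDisposable ourselves (short of holding unmanaged resources directly).
        private readonly FileStream _stream;

        public DataFile(string path) => _stream = File.OpenRead(path);

        public int ReadNextByte() => _stream.ReadByte();

        // No finalizer, no nulling of fields; just dispose what we own.
        public void Dispose() => _stream.Dispose();
    }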

This has been a great series so far, but we just went off the rails. Figures we'd do so with the "anti-pattern" that is the Singleton. :)

First, I want to clear up a common misconception. The Singleton design pattern is probably the worst named of all of the design patterns, because according to the GoF book the pattern constrains the number of instances created. Note that it does NOT say there's only one instance. For example, a class used to talk to a server could be designed as a Singleton. If we know there's exactly two such servers we can talk to then exposing a ServerA and a ServerB property means we're still following the Singleton pattern even though we create an instance of the class for both of those. I know, not the best example of this, but strictly speaking this is what Singleton does. It restricts the number of instances, usually but not exclusively to one.
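
A rough sketch of that server example (hypothetical names):

    // Singleton in the GoF sense: the class itself controls how many instances exist.
    // Here the count is constrained to exactly two, one per known server.
    public sealed class ServerConnection
    {
        public static ServerConnection ServerA { get; } = new ServerConnection("server-a.example.com");
        public static ServerConnection ServerB { get; } = new ServerConnection("server-b.example.com");

        public string Host { get; }

        // Private constructor: nothing outside this class can create more instances.
        private ServerConnection(string host) => Host = host;
    }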

In your implementation you use the DCL (double-checked locking). I firmly believe you should never do this. The nuances of getting it right (in many languages you can't) are complicated. In the video you admit several times to not fully understanding it, and make some statements about it that are not entirely accurate (such as stating that volatile will cause it to wait until the object is fully constructed... this shows you understand the issue exists, but not what that issue is exactly, or how the fix works). So, since there's really no reason to use this construct, it's better not to. In this case, static construction/initialization would deal with this for you. Don't agree that it does? OK, use Lazy<T> instead. Don't rely on a very low-level construct you don't fully understand here; use a higher-level construct you can reason about instead.
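
Both alternatives are trivial to write. A sketch, using a hypothetical Settings singleton:

    using System;

    // Option 1: static initialization. The runtime guarantees the initializer runs
    // exactly once, in a thread-safe way, before first use. No locks, no volatile, no DCL.
    public sealed class Settings
    {
        public static Settings Instance { get; } = new Settings();
        private Settings() { }
    }

    // Option 2: Lazy<T>. Same guarantee, but construction is deferred until Instance is
    // first accessed, and all of the thread-safety reasoning lives inside Lazy<T>.
    public sealed class LazySettings
    {
        private static readonly Lazy<LazySettings> _instance =
            new Lazy<LazySettings>(() => new LazySettings());

        public static LazySettings Instance => _instance.Value;
        private LazySettings() { }
    }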

You threw in IDisposable in this discussion for some reason, and really messed it up. Firstly, the whole point of a Singleton is that it will exist for the entire lifetime of the application, so there's exactly zero need to make it IDisposable. Who's going to call Dispose and when? The Main method before exit? What purpose does that serve?

You probably made this mistake because it appears you don't understand the difference between IDisposable and finalizers. You seemed to indicate in the video that the GC will call Dispose. It won't. That's why your finalizer calls it. You did follow the "IDisposable pattern" correctly, but frankly that pattern is broken and wrong (explaining that statement would take too long here, but the reality is you probably should never code a finalizer yourself).

However, your implementation of Dispose is pointless at best. Setting members to null in Dispose is in all likelihood a waste of time and effort. Theoretically this could allow an object to be collected sooner, but in reality this will hardly ever be true, and unless that object tracks unmanaged resources there's no benefit to it happening sooner anyway. No, Dispose is needed only if you have members that need to be disposed (or, ignoring my comment about the "IDisposable pattern" being broken, to release unmanaged resources). Bottom line, including IDisposable here was wrong, will confuse far too many people, and you didn't do it correctly in any event.

I try to stay out of them as well. I usually use pull, but unlike the site I posted that's on the other side, I'm not going to try and convince you I'm right. :) Understand the difference (there's not much) and decide for yourself which you prefer, because it is just a preference (unless you're in one of the edge case scenarios where you really do want to just compare or need to compare before deciding to merge).

> git fetch origin master

This says to fetch the 'master' branch from the remote named 'origin'. What's not spelled out in the command is where it actually fetches to. It fetches the remote branch into a local branch that it names 'origin/master'. Maybe you're seeing the answer already. :)

> git merge origin/master

This says to merge the branch named 'origin/master' into the branch you're currently on. This 'origin/master' branch is a local branch, not a branch on a remote.

So, 'origin/master' is just a branch naming convention used to distinguish branches that have been fetched from remote repositories.

BTW, I didn't care for your explanation of when to use fetch/merge vs. pull. If there are conflicts, a pull still allows you to resolve them, so that's not really a reason to use fetch/merge instead. Frankly, the difference is mostly just a religious debate. There are a good number of folks who recommend fetch/merge, such as this one: https://longair.net/blog/2009/04/16/git-fetch-and-merge/. The reasoning is highly opinionated, however, and I find it funny that most of the folks who hold this opinion do a fetch and then immediately do a merge without doing anything else in between. If that's what you're doing then you're just typing more, because that's exactly what a pull does. The only time a separate fetch is beneficial is when you want to compare your local version with the remote version before (or even without) merging. That's really it. I actually prefer to use pull most of the time for this very reason.
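
For example, the compare-first workflow I'm describing is just this (using the same 'origin' and 'master' names as above):

> git fetch origin master
> git diff master origin/master
> git merge origin/master

Skip the diff in the middle and the remaining two commands are exactly what 'git pull origin master' does for you in one step.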