Category: Software Development (C#)

I’ve been meaning to write this post for a while. Lately, the places I’ve worked at have been pushing to go from Team Foundation Version Control (TFVC) or an SVN-based derivative to Git. To me, they both seemed the same. They both handle branching. They both do all the same stuff. I just couldn’t put my finger on the difference between the two.

Over the past 8 months, I worked in a shop that converted from TFVC to Git. And, for some reason, working with the source control did feel different. I went from being a big proponent of TFVC to actually liking Git better. And I *still* couldn’t figure out the difference.

It finally hit me. I was explaining the differences to someone at my new job, and I got a flash of insight that may have been obvious to everyone else…

Here’s what I finally understood:

TFVC, and any SVN/PVCS-style source control system, builds its branches on history. When you branch, you start with the base, then build on top of that to get your files. TFVC is always looking backwards.

Git works off of changes. While branches start off of a specific base, you can apply Git changes to literally ANYTHING. It is very easy to apply a change to a completely different branch, because Git doesn’t worry about the base. It just cares about the change. Git allows you to look forward.

When I worked with Git, if I wanted to bring my changes up to date, it was stupidly simple to create a new branch off of master, apply my commits to it, and boom, everything was current. Trying to do something like that in TFVC was insane, because the comparisons started from the point where the branch occurred and then had to be reconciled. Doable, but nowhere near as efficient.
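In Git terms, the workflow I’m describing looks roughly like this (branch names and commit hashes are illustrative):

```shell
# Start a fresh branch from the latest master and replay just my commits onto it.
git checkout master
git pull                          # bring master up to date
git checkout -b feature-rebased   # new branch off the current tip
git cherry-pick abc1234 def5678   # apply my commits; Git only cares about the changes
```

Running `git rebase master` on the original feature branch accomplishes much the same thing by replaying the commits automatically.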

After that, I now really enjoy Git for feature branching. It’s still a bit of a nightmare to keep track of everything, but the tools for that are getting better and better.

“Wow, this is great!”, you say. On the initial announcement, yeah, this seems like a great idea, something developers have spent YEARS clamoring for. No more Cygwin, no more crazy emulators, native everything-Linux on a Windows box!

But, as Charles Dickens wrote in A Tale of Two Cities, “It was the best of times, it was the worst of times.”

After getting over my initial giddiness at these VERY cool announcements, the question that came to mind was ‘WHY!?!?’ Why is Microsoft doing this? This isn’t just a ‘hey, let’s do some cool experiments to get developers back’. These are very serious investments that are not being made on a lark.

First off, the Linux love affair seems to have started with Satya Nadella. I think that under Steve Ballmer, the words Linux, Unix, and OS X (and iOS) were banned. Microsoft had not done cross-platform software successfully since the early days of Excel for the Mac and its port to Windows. Windows for Alpha and Windows for Itanium never took off. Office for Mac was a redheaded stepchild. Heck, even getting Windows Mobile to run on phones proved to be a huge challenge.

Since Mr. Nadella took charge, Linux has not only been unbanned, it has been completely embraced. It started with the Mac and either Silverlight or Office. Since OS X is based upon BSD Unix, Microsoft had to come up with tools to let them develop for the Mac. Microsoft started out slow, but lately they have been able to bring pretty good parity to Office for the Mac. In doing so, they have built up a better understanding of developing for Unix, and are now applying that to Linux.

The one thing that worries me is that Microsoft seems to be developing somewhat of an inferiority complex. Microsoft under Bill Gates and Steve Ballmer would have always been ‘we think ours is better, deal with it’. They were almost as arrogant as Steve Jobs. The new Microsoft is almost apologetic: ‘Hey, we want to be where the cool stuff is, and we realize our stuff isn’t cool’. Which is sad, considering that the latest Visual Studio is awesome, PowerShell is cool, and Windows 10 is probably the best OS they’ve ever done.

Am I excited by what Microsoft is producing? Sure! I love the fact that my skill set will start to be more cross-platform. I just want to know ‘why’. Yes, I understand this will help Azure, and that is probably where the future of Microsoft is. But a LOT of resources are being poured into this Linux initiative, and there doesn’t seem to be a way for Microsoft to make money from it. Microsoft is not Google, which can play with cool things just because they’re cool.

For all of the developers and DevOps people out there, Visual Studio 2015 was released this week. I’ve been working with it for a week on the latest build of Win10 (WinX, build 10240), and I must say, I’m impressed. One of the MOST impressive things has been the ‘backwards compatibility’. My company uses Visual Studio 2013, and I was able to open my project in 2015 with no issues and then go back to 2013 with no problems, either. If I start using some of the new VB.NET features, I might have some issues, but I think I’ll be able to get everyone to switch to 2015 pretty quickly 🙂

One of the first things I found in the changes to TFS is a new option for what to do with the artifacts of the build (the source / deployable items). Previously, these would just go into some shared drop folder. Now, the artifacts can instead be pushed back to the server, which makes a shared drop location unnecessary and allows the artifacts to be downloaded directly from the internal or hosted TFS website. This is both good and bad: it’s great not to need a shared drop location anymore, but uploading to the server adds time to the build. Plus, if you use VSO, it means your files are going up to Microsoft. Fortunately, sending to the server is only an option, and the old style of dropping the files in a share is still around.

One more thing I wanted to point out is that the build engine in Team Foundation Server and Visual Studio Online has been updated AGAIN. When I first saw the change, I was like ‘WTF… in 4 versions (2010, 2012, 2013, and 2015), they’ve completely redone the build system 3 times!?!’ I think the XML build process, à la Ant, was a pain in the rear. The move to XAML looked to be going in the right direction, but the build scripts were very complicated. When I saw the ‘new’ build system, my first thought was… ugh, really? Not interested.

Then I took a better look, and tried it…

After playing with it for a bit, I like the way that the new build engine incorporates the visual style of the XAML builds and the succinct style of the XML builds. There is a nice visual aspect to the builds, but there is enough customization that a good balance has been reached. Plus, it is very easy to extend the new system.

There are a couple of concepts to be aware of when using the new system. The old build system had some options turned on by default that would generate the structures correctly in the build. Those options are not on by default in the new system. The following article helped me with part of the configuration: http://www.deliveron.com/blog/building-websites-team-foundation-build-2015/ The important part of the article is the settings for the MSBuild arguments.
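The article’s exact arguments aren’t reproduced here, but for getting per-project output folders the MSBuild arguments generally look something like this (the staging-directory variable is the TF Build 2015 convention, and using it here is my assumption):

```
/p:GenerateProjectSpecificOutputFolder=true /p:OutDir=$(build.stagingDirectory)
```

These go in the “MSBuild Arguments” field of the Visual Studio Build step in the build definition.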

Using those arguments in the MSBuild settings builds the projects in the same structure as the source directory.

One last thing that has changed with the build system… With the new build system, the TFS build agent no longer needs to be installed through the big setup.exe that was required prior to 2015. Now there is a link on the web site to download the agent by itself. After that, one just has to run the PowerShell script to configure the agent.

All in all, Visual Studio and Team Foundation Server 2015 look like an excellent release!

This is the first Apple event I’m not really interested in. WWDC (dub-dub) has become Apple’s launch pad for the next versions of iOS and OS X (and now watchOS). There have been several exciting hardware announcements at dub-dub, such as the Retina MacBook Pro and the radically redesigned Mac Pro, but last year had nothing hardware-wise, and this year is looking the same. The Wall Street Journal is reporting no Apple TV, and just about every system short of the Mac Pro has been updated in the last 12 months. I doubt that a new type of product would be announced so soon after the Watch, and certainly not at a developers conference.

So far, the rumors and banners point to updates to OS X (10.11), iOS 9, the first real OS for the Apple Watch, and a music subscription service. Ho-hum… I’m *very* happy with OS X 10.10 and iOS 8. Is there room for improvement? Of course! Is there going to be anything radical? Probably not. This is one of the real challenges of creating an OS update on a yearly schedule… it’s hard to do anything truly different in that amount of time.

The rumored music service actually bothers me on many levels. First and foremost, it’s the return of DRM. I currently like the fact that if I buy a song on iTunes, I can play it in one of my transcription programs. If I want to pull it into a video, that’s no problem. With a subscription service, it’s back to the bad old days of songs being locked into the players. Also, unless Apple ups their game to lossless, I’m not going to be terribly interested. I feel like the subscription services are starting to really hurt the music industry, as artists are getting less and less for their work. Check out this article in Rolling Stone for more info: http://www.rollingstone.com/music/news/the-new-economics-of-the-music-industry-20111025

What is ironic is that Microsoft’s Windows 10 (or WinX, as I like to call it) is starting to be VERY interesting. The latest beta of WinX is very stable and very cool. Microsoft has realized that they have to do a better job with Windows, and it shows. WinX is the most OS X-like version of Windows that I’ve seen. Plus, Microsoft is putting a lot of focus on making WinX easy to deploy and more developer / administrator friendly. The latest PowerShell has a ton of features that make dealing with large numbers of computers easier. I don’t see Apple trying to push much beyond what is in the basic Unix system.

My current project is using Knockout.js / jQuery to get data to HTML5 pages, and I need to get information about the logged-in user. There are several ways to do this, but one of the simplest is to let the web services access the user information stored in the session.
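A minimal sketch of that idea, assuming ASP.NET page methods; the “UserName” session key and method name are illustrative, not from the original project:

```csharp
// EnableSession = true lets the web method read HttpContext session state.
[WebMethod(EnableSession = true)]
public static string GetCurrentUserName()
{
    // "UserName" is a hypothetical key; use whatever your login code stores
    return HttpContext.Current.Session["UserName"] as string ?? string.Empty;
}
```

From Knockout / jQuery, a `$.ajax` POST to `Page.aspx/GetCurrentUserName` with a JSON content type returns the value for binding into the view model.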

Since I’ve gone back to programming, more things on this blog have been to help me remember what I did! This is one of the programming posts, music will be on my next entry, I promise!

More Developer Express fun… I both love and hate some of the things that DevExpress does with their tools. I’ve recently started using the XtraReports suite for some ‘more than trivial’ reports and hit a snag. I had a devil of a time trying to solve this issue; it took almost a day to figure out a way to make it work.

Here’s the setup:

I had a report that had three different sets of data. The exact information is a Point of Sale daily report needing sales, taxes, and payments, but that’s not pertinent to the problem here… To use the three disparate data sources, I started off with three sub reports under a master report that had only one record. This worked grand! I was able to do the reports with no problem… but… I needed a total across all three sub reports. This is where my implementation came crashing down. XtraReports sub reports have a great way to send information to the sub report, but no good way to get information BACK to the parent report. I tried everything I could think of for almost a day, but with no luck using the sub reports.

After reading a LOT of pages and articles, I found out that the XtraReports had a very neat feature. The reports have what DevExpress calls a ‘DetailReportBand’ on a report. This allows you to have a fully nested sub report inside of the main report without it being an actual sub report in a completely separate report. Hmmm… combine this with the parameters feature, and I may have something… It was easy enough to move the sub reports into Detail Report Bands, and it actually simplified the code a bit. Hoo-rah! I still had to get the running total, though.

After a couple of false starts, I did get the running total working. To make it work, a report parameter needs to be added to the report. Then, the easiest way I found to get the subtotals into the parameter was to use the sub report’s summary total XRLabel’s SummaryCalculated event to add the amount to the parameter. At the end, it was straightforward to set a label to the parameter to show the overall total.
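A rough sketch of the event handler — the label and parameter names (lblSubtotal, TotalParameter) are hypothetical, and this is the general shape of the XtraReports API rather than code copied from the report:

```csharp
private void lblSubtotal_SummaryCalculated(object sender, TextFormatEventArgs e)
{
    // Accumulate this band's computed subtotal into the shared report parameter
    decimal runningTotal = Convert.ToDecimal(Parameters["TotalParameter"].Value);
    Parameters["TotalParameter"].Value = runningTotal + Convert.ToDecimal(e.Value);
}
```

A grand-total label at the end of the report can then be set from the parameter once all three bands have fired.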

I know this blog post may not make a lot of sense to most, but it will help me in the future when I have to do this again, and can’t remember what I did!

This nearly drove me crazy. I’ve been working with the Developer Express grid in Windows Presentation Foundation (WPF), and had NEVER been able to find a way to center the data in a column. Turns out, it’s rather easy, BUT it requires the control nesting that makes WPF both powerful and frustrating.

To center the text in the column, the dxg:GridColumn needs an EditSettings tag added. Inside that tag, add a TextEditSettings tag and set the HorizontalContentAlignment to “Center”. Voila!
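In XAML, the nesting looks roughly like this (assuming the usual dxg/dxe namespace mappings for the DevExpress grid and editor libraries; the FieldName is illustrative):

```xml
<dxg:GridColumn FieldName="Amount">
    <dxg:GridColumn.EditSettings>
        <dxe:TextEditSettings HorizontalContentAlignment="Center" />
    </dxg:GridColumn.EditSettings>
</dxg:GridColumn>
```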

New gotcha that we hit today, and it’s one that is probably hit a lot. To prevent denial-of-service exploits, the default limit on the size of JSON data sent from the server is relatively small. To raise it, update the jsonSerialization maxJsonLength setting in the web.config file.
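The setting lives under system.web.extensions. The value below is int.MaxValue, shown only as the upper bound — pick a limit that fits your actual payloads:

```xml
<configuration>
  <system.web.extensions>
    <scripting>
      <webServices>
        <jsonSerialization maxJsonLength="2147483647" />
      </webServices>
    </scripting>
  </system.web.extensions>
</configuration>
```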

With my new job, I’m back to programming. Which means, more programming articles 🙂 So, if programming in .NET isn’t your thing, you can skip these 🙂

A couple of months ago, at my old job, I was speaking with a programmer who was interested in using Knockout.js, but couldn’t because the project he was working on was an ASP.NET WebForms project. At the time, every Knockout.js example I’d seen used ASP.NET MVC-style projects, so I agreed with him that it was a shame and moved on. Boy, do I wish I’d been a little better informed at the time.

It turns out that Knockout.js and ASP.NET WebForms work JUST FINE TOGETHER! It does take thinking a bit differently about what the WebForms do, and it is sort of bastardizing the whole WebForms concept, but it can be done, and it’s fairly easy 🙂 By using this style of code, the WebForms become more like MVC pages rather than the usual “onPostBack” code.

One of the reasons I say that I’m starting a series is because I want to point out the little ‘gotchas’ that pop up when doing some of this coding. I’m still in the learning (and stealing of ideas) stage, and I want to document what I’m seeing 🙂 All of the things I’m writing at the moment are coming from issues that my new team is running into.

The first gotcha we ran into was when creating a default Microsoft WebForms project. The latest default project template has a ton of great stuff set up, but some of the presets get in the way. Problem number one is AutoRedirectMode in the App_Start/RouteConfig.cs or App_Start/RouteConfig.vb file. The default setting is RedirectMode.Permanent; this needs to be changed to RedirectMode.Off to get the web method calls to work. (See this article: http://stackoverflow.com/questions/23667083/i-cannot-get-my-webmethod-to-work-in-asp-net. Note that the answer is in the comments, not the answers.)
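The change is a single line in RegisterRoutes; this is the stock Web Forms template shape (FriendlyUrlSettings comes from the Microsoft.AspNet.FriendlyUrls package):

```csharp
public static class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        var settings = new FriendlyUrlSettings();
        settings.AutoRedirectMode = RedirectMode.Off; // template default is RedirectMode.Permanent
        routes.EnableFriendlyUrls(settings);
    }
}
```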

There are more challenges that we had to work through, so more articles are on their way!

I ran into an interesting problem today… One of my clients is converting a Windows app’s database and business logic to web services. The transition has been surprisingly smooth. Well, smooth up until today. Since we are using the old .asmx web services, passing user-defined classes via the parameter list is almost a black art. We were trying to pass a class, BusinessObjectFoo, from the Windows application to the web service. The web service has a parameter that accepts BusinessObjectFoo, and both the client and the server reference the same library that contains the definition for BusinessObjectFoo. This should be a no-brainer and just work, right? Unfortunately, it doesn’t. Microsoft abandoned .asmx improvements in .NET 3.0 and left some rather serious holes in the functionality of .asmx web services. The ability to pass a user-defined class via the web service parameters happened to be one of them.

The crux of the problem is that when the client application pulls in the service reference, Visual Studio creates a new definition of the class from the web service’s WSDL rather than matching the class up with the libraries. (Apparently, this works correctly with WCF services). We tried several different solutions to try to allow passing the object via the web service, but no luck.

Enter my apology… In reviewing code for my normal job, I’d run across something that had been added to the code base with no explanation. That code was for AutoMapper. I got aggravated at the developer for throwing another ‘new toy’ into our project. I did a little reading to understand AutoMapper, but didn’t see why it had been added. After today, I apologize to those devs; this one is pretty helpful.

Back to the story 🙂

What needed to be done was to move the data from the normal, shared class into one of the web service’s auto-generated classes. The first try was an object copy routine. That didn’t go so well. Then I realized that I should try the AutoMapper NuGet package, as both objects had the same field structure / interface, just different implementations. That’s when reading up a bit on AutoMapper and trying it really saved the day. Instead of writing a big, honkin’ copy class between my shared object and the web service object, I just mapped the two with AutoMapper. Then, it was a simple map call to populate the correct object with all of its data.
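With the classic static AutoMapper API of that era, the mapping boils down to two calls. ServiceRef stands in for the generated service-reference namespace, and webService.DoSomething is a placeholder for the real call:

```csharp
// One-time configuration: map the shared class onto the proxy-generated one
Mapper.CreateMap<BusinessObjectFoo, ServiceRef.BusinessObjectFoo>();

// Per call: copy the data across, field for field
var proxyFoo = Mapper.Map<ServiceRef.BusinessObjectFoo>(sharedFoo);
webService.DoSomething(proxyFoo);
```

Because both classes expose the same property names, AutoMapper matches the fields by convention with no per-property code.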

I’ve had a problem that has been driving me crazier than usual… I’ve been trying to get the TFS Build process to create the ‘drop’ directories based upon the projects, rather than one huge glob directory of every file from every project. Surprisingly, the default output of projects inside TFS Build is to drop every file into one directory, EXCEPT for web sites. What I’ve run into is needing the applications and utility programs that are part of the solution to be deployed into their own directories when the build completes. After spending a week on rather exotic solutions, including modifying the TFS Build definitions, writing all sorts of scripts, and looking at every package under the sun, I finally came across the CORRECT solution… tacking a property onto the MSBuild directive called GenerateProjectSpecificOutputFolder. Setting that to true outputs the per-project directory structure. This is EXACTLY what I’ve been looking for! Thank you, Jason Stangroome, for this WONDERFUL find!!!
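On a plain command line, the equivalent is (the solution name is illustrative):

```
msbuild MySolution.sln /p:GenerateProjectSpecificOutputFolder=true
```

In a build definition, the same `/p:` property goes in the MSBuild Arguments field.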

Update… If you use the ReleaseTfvcTemplate.12.xaml from Release Management 2013 Update 2 client directory, the tokenization steps are missing. Here is a link to a template that has the correct tokenization step, plus has the tokenization as a flag. Cool stuff…

Ok, I admit it… I’m an Apple snob. Apple has been firing on all cylinders since the launch of OS X 10.4 and hasn’t appeared to be slowing down. The switch to Intel drew me in, and I haven’t looked back. For work, though, I live in a different world. I’m fortunate to work at a company that allows me to have a Mac Pro desktop, a MacBook Pro laptop, and an iPhone; iPhones and iPads rule the roost for phones and tablets. With Parallels, VMWare, RDP, and Back to My Mac, I can live in both worlds and be VERY happy. So, I do still keep up with Microsoft.

This week was a big week for the Evil Empire (Microsoft, not Apple!). Lots of goodies came out:

Windows 8.1 – This goes a LONG way toward fixing the absolute nuclear disaster that Windows 8 is. 8.1 doesn’t fix everything, but it does fix a lot. It’s amazing that Microsoft realized how bad 8 was and worked quickly to resolve it. Even better, 8.1 is a free, IN-PLACE upgrade to Windows 8. Microsoft is finally learning how to do in-place updates. Hopefully, the days of reinstalling everything will soon be behind the Windows crowd. Also, Windows 8.1 is a more efficient OS. On the virtual machines that I run at work and home, Win 8.1 is MUCH faster than 7. My guess is that the reduced video card requirements help out in that area.

Visual Studio 2013 – This update is a lot bigger than most people realize. Visual Studio 2012 had the same flaw that Windows 8 had… it completely failed at doing the job it was supposed to do. The .NET 4.5 framework has gotten better and better, but VS 2012 made developing for it truly horrible. VS 2013 fixed a LOT of the issues. The Team Foundation part of VS 2013 is very usable, and the new IDE tools in 2013 make programs like ReSharper and CodeRush not quite the requirements they used to be. Plus, the database tools have come back and are better than ever. The loss of the scripting engine in the IDE still hasn’t been addressed, and it’s doubtful that it ever will come back, unless it is as PowerShell.

Remote Desktop Client for non-Windows platforms – Microsoft released Remote Desktop clients for Android and iOS, plus did a SIGNIFICANT upgrade to the Mac desktop client. It’s funny, almost all the news sites have talked about the Android / iOS client, but none have reviewed the OS X client. It’s not perfect, but it certainly addresses a LOT of features that have been missing. Being able to use the Remote Desktop with a remote desktop gateway and true multi-monitor support have been great.

Hopefully, this represents a new direction for Microsoft. Many of their products have finally matured from ‘it sorta works’ to ‘I love working with it’. Some examples: Outlook.com finally supports SMTP, which makes using the service with a non-Microsoft email client useful (deletes and reads are global!!! Hooray!); SkyDrive rocks; Azure is competitive. Lots of little things across the board just seem to finally be coming together.

This is a good source control and task management system for a 5-person team. The price can’t be beat, at least not right now (free). I was able to take a very small team and, in the space of a couple of hours, get them set up on TFS and start moving their existing projects over. They have 2 1/2 developers working off shared drives for lots of projects. No source control, build system, or deployment system just gives me the heebie-jeebies… Fortunately, they were using Visual Studio 2010, so moving their projects was fairly straightforward. The only real downside is that everyone needs a Microsoft Live ID, but since those are free, it’s not a big deal.

This one had me banging my head for the last couple of hours… I have entities A, B, and C in my entity diagram, with A linking to B and B linking to C. I want to find all of the C items that are associated with a specific instance of A. Running through a range of ways to solve the issue got me nowhere, other than frustrated. Fortunately, I found the answer on StackOverflow.com with this article. The long and short of it is that one has to use an .Include statement in the LINQ query to pull in everything needed. So, the code would look like this:
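The original snippet didn’t survive here, so this is a hedged sketch with hypothetical entity and set names, using Entity Framework’s string-based Include to walk A → B → C:

```csharp
// Load the A instance with its Bs and each B's Cs, then flatten out the Cs
var cs = context.As
    .Include("Bs.Cs")
    .Single(a => a.Id == targetId)
    .Bs
    .SelectMany(b => b.Cs)
    .ToList();
```

Without the Include, lazy loading would issue a separate query per B; with it, everything comes back in one round trip.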

IIS Express is an awesome tool for development. Unfortunately, there are a couple of things I need to do that are not easily done. Number one is setting up a web service application as a virtual directory: there isn’t an easy way to do this through the GUI tools. Fortunately, I found this article on StackOverflow.com. Basically, one just has to add a new application to the site with the virtual directory information.
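The change goes in IIS Express’s applicationhost.config (under Documents\IISExpress\config): an extra application entry under the site acts as the virtual directory. The site name, paths, and port here are all illustrative:

```xml
<site name="MySite" id="1">
  <application path="/" applicationPool="Clr4IntegratedAppPool">
    <virtualDirectory path="/" physicalPath="C:\Projects\MySite" />
  </application>
  <application path="/Services" applicationPool="Clr4IntegratedAppPool">
    <virtualDirectory path="/" physicalPath="C:\Projects\MyServices" />
  </application>
  <bindings>
    <binding protocol="http" bindingInformation="*:8080:localhost" />
  </bindings>
</site>
```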

Ok, this may seem pretty obvious, but it bit me a couple of times last week… When deploying a WCF service to a Windows 2008 / 2008 R2 server, make sure the WCF activation is checked on the list of installed features. Otherwise, the WCF service WILL NOT WORK! Doh!

In .NET 3.5, adding a Service Reference to a web service would automatically use the proxy server. In .NET 4.0, something changed: the call was failing with the error ‘The remote server returned an error: (407) Proxy Authentication Required.’ Finding the answer to this problem was a bit difficult, as WCF 4.0 uses the term ‘proxy’ for its code proxies. It took a bit, but the answer was dead simple…
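The original snippet isn’t preserved here, but the usual fix for the 407 is to tell the client’s config to pass the current Windows credentials to the system proxy:

```xml
<configuration>
  <system.net>
    <defaultProxy useDefaultCredentials="true" />
  </system.net>
</configuration>
```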

Today, I was implementing the AdRotator control on one of our main pages. The control needed to do the following… display an image if there was something to display, but if not, don’t display ANYTHING.

Unfortunately, what the control does if there is nothing to display is set the image source to an empty string. On Internet Explorer 8, this produces the following lovely result:

Doh! That’s not what I wanted!

Fortunately, there is a quick answer. The AdRotator control has an AdCreated event. By assigning a bit of code to it, one can check whether the ImageUrl property is blank, and at that point it’s very easy to hide the control.
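A minimal sketch of the handler (the control name is illustrative):

```csharp
protected void AdRotator1_AdCreated(object sender, AdCreatedEventArgs e)
{
    // No ad to show: hide the control instead of emitting an empty image source
    if (string.IsNullOrEmpty(e.ImageUrl))
    {
        AdRotator1.Visible = false;
    }
}
```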

Ok, I finally get it… Recently, .NET code has started to look really funky. I’ve been reading a lot of the MSDN blogs, and the C# has been looking really weird. Stuff like (i => i > 5) or even (i => { Console.WriteLine(i); }) have been showing up, and I’ve been like ‘What the heck!?!’

I wasn’t getting this new code. I felt like the time I saw my first Windows C program. (Where’s main()? How do the methods get called? Why’s there this giant switch statement?)

In an article entitled ‘Java vs. .NET developers’ by Greg Young, where he discussed Davy Brion’s article ‘At This Point, I’d Prefer Java Developers Over .NET Developers’, a couple of things hit me in the head… First off, Davy Brion mentioned that the 2nd-year Java developers he interviewed seemed to understand many patterns and how to use them effectively. Heck, after reading his article, *I* had to look up several of the things he talked about. One of those he mentioned was the Inversion of Control pattern. What I do find interesting is that IoC is, at its core, built on delegates and callbacks, and .NET developers have been using those for YEARS. What the article did do was raise my awareness of delegates and IoC.

A bit later, I was reading the Pickaxe Ruby book and came across how Ruby does its callbacks / blocks. It REALLY made sense. Then it hit me… this is what .NET is doing with lambda expressions! I then looked lambda expressions up and re-read my books on .NET 3.5, and lo and behold, they made PERFECT sense! Being able to concisely pass delegate code to a method call… really cool!
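For anyone else hitting the same wall, the ‘funky’ syntax is just a compact delegate:

```csharp
var numbers = new List<int> { 3, 9, 5, 8, 2 };

// Lambda as a predicate delegate (Func<int, bool>): keep values greater than 5
var bigOnes = numbers.Where(i => i > 5).ToList();

// Lambda as an action delegate (Action<int>): run a statement per element
numbers.ForEach(i => { Console.WriteLine(i); });
```

Both lambdas compile down to the same delegate types that .NET has had since 1.0; the arrow syntax just removes the ceremony.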

Yes, I’ve been away from blogging for a bit. I wish this one was something cool, but alas, it’s just a tip on development configurations.

So, here goes…

Problem:

My development team has a common development configuration file. This file contains a reference to a shared development database machine. This is fine, as long as everyone is using the same machine and instance of a database. But what if a developer wants to use a different database machine to test a change that breaks the database?

One Solution:

One thing that I came up with is to add a machine name to the HOSTS file. In Windows, this is under C:\Windows\System32\drivers\etc. Add a new entry to the file, something like DevelopmentDB. Then, just reference that machine name in the configuration files. Each developer can then have the DevelopmentDB name pointing to a different IP, while still maintaining a single reference in the configuration file.
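An entry in each developer’s hosts file would look like this (the IP is whatever that developer’s test database box happens to be):

```
# C:\Windows\System32\drivers\etc\hosts
192.168.1.50    DevelopmentDB
```

The shared config then just says `Server=DevelopmentDB;…` and resolves to a different machine per developer.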

The last couple of weeks have been a wee bit busy. I think I’ve been home in the evenings about once in 4 weeks. Between trying to work with a band, doing my side work, and keeping up with friends, life has been very hectic. And it feels like things have stalled. For the last couple of months, life has been ‘spinning its wheels’. Today changed a bit of that.

Since the middle of the summer, my company has been wanting to have a replicated SQL Server system. We knew what we wanted, but didn’t know HOW to implement it. So, at the beginning of August, I put together a plan to get our systems working. It has taken FAR longer than I expected, but today we hit a major milestone… we have an actual system replicating with the ability to do a manual failover. To get this working required an upgrade of a machine (for testing), a lot of reading, and some good old fashioned ‘hacking’.

The one tidbit that I wanted to drop on my blog so that I REMEMBER what to do is… ‘Set up the security certificates!’ This is a very important step, and is fairly easy to do. Unfortunately, the SQL Server documentation doesn’t place that much emphasis on that little step!