Category: Programming

I’ve been meaning to write this post for a while. Lately, the places I’ve worked at have been pushing to move from Team Foundation Version Control (TFVC), or an SVN-based derivative, to Git. To me, they were the same. They both handle branching. They both do all the same stuff. I just couldn’t put my finger on the difference between the two.

Over the past 8 months, I worked in a shop that converted from TFVC to Git. And, for some reason, working with the source control did feel different. I went from a big proponent of TFVC to actually liking Git better. And I *still* couldn’t figure out the difference.

It finally hit me. I was explaining the differences to someone at my new job, and I got a flash of insight that may have been obvious to everyone else…

Here’s what I finally understood:

TFVC, or any SVN/PVCS-style source control system, builds its branches on history. When you branch, you start with the base, then build on top of that to get your files. TFVC is always looking backwards.

Git works off of changes. Strictly speaking, Git commits are snapshots, but the tooling treats each commit as a change that can be applied almost anywhere. While branches start off of a specific base, it is very easy to apply a change to a completely different branch (with cherry-pick or rebase), as Git doesn’t worry about the base. It just cares about the change. Git allows you to look forward.

When I worked with Git, if I wanted to bring my work up to date, it was stupidly simple to create a new branch off of master, apply the changes from my commits, and boom, my commits were up to date. Trying to do something like that in TFVC was insane, because the comparisons started at the point where the branch occurred and then had to be reconciled. Doable, but nowhere near as efficient.
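That workflow can be sketched with a toy repo (the repo, file, and branch names here are all invented for illustration); `git cherry-pick` is the command that applies a commit’s change onto a completely different base:

```shell
# Toy demo: one commit on a feature branch, applied onto a brand-new
# branch off master. Requires git 2.28+ for `init -b`.
set -e
git init -q -b master demo && cd demo
git config user.email dev@example.com
git config user.name Dev

echo base > file.txt
git add file.txt
git commit -qm "base"

# Make one change on a feature branch and remember its commit id.
git checkout -qb feature
echo change >> file.txt
git commit -qam "feature change"
FEATURE=$(git rev-parse HEAD)

# New branch off master, then apply the feature commit onto it.
git checkout -q master
git checkout -qb fresh-branch
git cherry-pick "$FEATURE"   # git only needs the change, not the old base
cat file.txt                 # the change is now on fresh-branch
```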

After that realization, I really enjoy Git for feature branching. It’s still a bit of a nightmare to keep track of everything, but the tools for that are getting better and better.

“Wow, this is great!”, you say. On the initial announcement, yeah, this seems like a great idea, something developers have been clamoring for for YEARS. No more Cygwin, no more crazy emulators, native everything Linux on a Windows box!

But, as Charles Dickens wrote in A Tale of Two Cities, “It was the best of times, it was the worst of times.”

After getting over my initial giddiness at these VERY cool announcements, the question that came to mind was ‘WHY!?!?’ Why is Microsoft doing this? This isn’t just a ‘hey, let’s do some cool experiments to get developers back’. These are very serious investments that are not being made on a lark.

First off, the Linux love affair seems to have started with Satya Nadella. I think under Steve Ballmer, the words Linux, Unix, and OS X (and iOS) were banned. Microsoft had not successfully done cross-platform software since the early days of Excel for the Mac and its port to Windows. Windows for Alpha and Windows for Itanium never took off. Office for Mac was a red-headed stepchild. Heck, even getting Windows Mobile to run on phones proved to be a huge challenge.

Since Mr. Nadella took charge, Linux has not only been unbanned, it has been completely embraced. It started with the Mac and either Silverlight or Office. Since OS X is based upon BSD Unix, Microsoft had to come up with tools to let them develop for the Mac. Microsoft started out slow, but lately they have been able to bring Office for the Mac to pretty good parity with Windows. In doing so, they have built up a better understanding of developing for Unix, and are now applying that to Linux.

The one thing that worries me is that Microsoft seems to be developing somewhat of an inferiority complex. Microsoft under Bill Gates and Steve Ballmer would always have been ‘we think ours is better, deal with it’. They were almost as arrogant as Steve Jobs. The new Microsoft is almost apologetic: ‘Hey, we want to be where the cool stuff is, and we realize our stuff isn’t cool’. Which is sad, considering that the latest Visual Studio is awesome, PowerShell is cool, and Windows 10 is probably the best OS they’ve ever done.

Am I excited by what Microsoft is producing? Sure! I love the fact that my skill set will start to be more cross-platform. I just want to know ‘why’. Yes, I understand this will help Azure, and that is where the future of Microsoft probably is. But this seems like a LOT of resources being poured into this Linux initiative, and there doesn’t seem to be a way for Microsoft to make money on it. Microsoft is not Google, which can play with cool things just because.

This is the first Apple event I’m not really interested in. WWDC (dub-dub) has become Apple’s launch pad for the next versions of iOS and OS X (and now watchOS). There have been several exciting hardware announcements at dub-dub, such as the Retina MacBook Pro and the radically redesigned Mac Pro, but last year had nothing hardware-wise, and this year is looking the same. The Wall Street Journal is reporting no Apple TV, and just about every system short of the Mac Pro has been updated in the last 12 months. I doubt that a new type of product would be announced so soon after the Watch, and certainly not at a developers conference.

So far, the rumors and banners point to updates to OS X (10.11), iOS 9, the first real OS for the Apple Watch, and a music subscription service. Ho-hum… I’m *very* happy with OS X 10.10 and iOS 8. Is there room for improvement? Of course! Is there going to be anything radical? Probably not. This is one of the real challenges of creating an OS update on a yearly schedule… it’s hard to do anything truly different in that amount of time.

The rumored music service actually bothers me on many levels. First and foremost, it’s the return of DRM. I currently like the fact that if I buy a song on iTunes, I can play it in one of my transcription programs. If I want to pull it into a video, that’s no problem. With a subscription service, it’s back to the bad old days of songs being locked into the players. Also, unless Apple ups their game to lossless, I’m not going to be terribly interested. I feel like the subscription services are starting to really hurt the music industry, as artists are getting less and less for their work. Check out this article in Rolling Stone for more info: http://www.rollingstone.com/music/news/the-new-economics-of-the-music-industry-20111025

What is ironic is that Microsoft’s Windows 10 (or WinX, as I like to call it) is starting to be VERY interesting. The latest beta of WinX is very stable and very cool. Microsoft has realized that they have to do a better job with Windows, and it shows. WinX is the most OS X-like version of Windows that I’ve seen. Plus, Microsoft is putting a lot of focus on deployment and on being more developer / administrator friendly. The latest PowerShell has a ton of features that make dealing with large numbers of computers easier. I don’t see Apple trying to push much beyond what is in the basic Unix system.

Since I’ve gone back to programming, more things on this blog have been to help me remember what I did! This is one of the programming posts, music will be on my next entry, I promise!

More Developer Express fun… I both love and hate some of the things that DevExpress does with their tools. I’ve recently started using the XtraReports suite for some ‘more than trivial’ reports, and hit a snag. I had a devil of a time trying to solve this issue; it took almost a day to figure out a way to make it work.

Here’s the setup:

I had a report that needed three different sets of data. The exact information is a point-of-sale daily report, needing sales, taxes, and payments, but that’s not pertinent to the problem here… To use the three disparate data sources, I started off using three subreports with a master report that would only have one record. This worked grand! I was able to do the reports with no problem… but… I needed a total across all three subreports. This is where my implementation came crashing down. XtraReports subreports have a great way to send information to the subreport, but they don’t have a very good way to get information BACK to the original report. I tried everything I could think of for almost a day, but had no luck with the subreports.

After reading a LOT of pages and articles, I found that XtraReports has a very neat feature: what DevExpress calls a ‘DetailReportBand’. This allows you to have a fully nested subreport inside of the main report without it being an actual subreport in a completely separate report. Hmmm… combine this with the parameters feature, and I may have something… It was easy enough to move the subreports into detail report bands, and it actually simplified the code a bit. Hoo-rah! I still had to get the running total, though.

After a couple of false starts, I did get the running total working. To get this to work, a report parameter needs to be added to the report. Then, the easiest way that I found to get the subtotals into the parameter was to use each band’s summary total XRLabel’s SummaryCalculated event to add the amount to the parameter. Then, at the end, it was straightforward to set a label from the parameter to show the overall total.
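A hedged sketch of that wiring (the parameter and control names are mine, not from the post, and the exact event argument types vary a bit between XtraReports versions):

```csharp
using System;
using DevExpress.XtraReports.UI;

public partial class DailyReport : XtraReport
{
    // Each detail report band's summary XRLabel gets this handler.
    private void subtotalLabel_SummaryCalculated(object sender, TextFormatEventArgs e)
    {
        // e.Value is the band's computed subtotal; fold it into the parameter.
        var total = Parameters["GrandTotal"];
        total.Value = Convert.ToDecimal(total.Value) + Convert.ToDecimal(e.Value);
    }

    // A label in the report footer shows the accumulated value.
    private void grandTotalLabel_BeforePrint(object sender, System.Drawing.Printing.PrintEventArgs e)
    {
        ((XRLabel)sender).Text = Convert.ToDecimal(Parameters["GrandTotal"].Value).ToString("C");
    }
}
```

The "GrandTotal" parameter needs to start at zero (and be reset if the report is generated more than once), since each band’s handler only adds to whatever is already there.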

I know this blog post may not make a lot of sense to most, but it will help me in the future when I have to do this again, and can’t remember what I did!

This nearly drove me crazy. I’ve been working with the Developer Express grid in Windows Presentation Foundation (WPF), and had NEVER been able to find a way to center the data in a column. Turns out, it’s rather easy, BUT it requires the control nesting that makes WPF both powerful and frustrating.

To center the text in the column, the dxg:GridColumn needs to have an EditSettings tag added. Inside of that tag, add a TextEditSettings tag and set the HorizontalContentAlignment to “Center”. Voila!
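In markup, that nesting looks roughly like this (assuming the usual dxg/dxe DevExpress namespace declarations; the field name is a placeholder):

```xml
<dxg:GridControl ItemsSource="{Binding Items}">
    <dxg:GridControl.Columns>
        <dxg:GridColumn FieldName="Amount">
            <dxg:GridColumn.EditSettings>
                <!-- TextEditSettings comes from the DevExpress editors (dxe) namespace -->
                <dxe:TextEditSettings HorizontalContentAlignment="Center" />
            </dxg:GridColumn.EditSettings>
        </dxg:GridColumn>
    </dxg:GridControl.Columns>
</dxg:GridControl>
```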

New gotcha that we hit today, and it’s one that is probably hit a lot. To prevent denial-of-service exploits, the default limit on the size of JSON data sent from the server is relatively small. To change it, update the jsonSerialization maxJsonLength setting in the web.config file:
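The snippet itself seems to have gone missing from the post; the setting lives under system.web.extensions, along these lines (the value shown is just an example; the default is 102400):

```xml
<configuration>
  <system.web.extensions>
    <scripting>
      <webServices>
        <!-- default maxJsonLength is 102400; raise it only as far as needed -->
        <jsonSerialization maxJsonLength="2097152" />
      </webServices>
    </scripting>
  </system.web.extensions>
</configuration>
```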

With my new job, I’m back to programming. Which means, more programming articles 🙂 So, if programming in .NET isn’t your thing, you can skip these 🙂

A couple of months ago, at my old job, I was speaking with a programmer who was interested in using Knockout.js, but couldn’t because the project he was working on was an ASP.NET WebForms project. At the time, everything I’d seen of Knockout.js referred to ASP.NET MVC-type projects, so I agreed with him that it was a shame, and moved on. Boy, do I wish I’d been a little more informed at the time.

It turns out that Knockout.js and ASP.NET WebForms work JUST FINE TOGETHER! It does take thinking a bit differently about what ASP.NET WebForms do, and it is sort of bastardizing the whole WebForms concept, but it can be done, and it’s fairly easy 🙂 By using this style of code, the WebForms become more like MVC pages rather than the usual “onPostBack” code.
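The shape of the pattern, as I understand it, is a static `[WebMethod]` on the page’s code-behind plus a Knockout view model that calls it with AJAX instead of posting back. A hedged sketch of the client side (page name, method name, and fields are all made up; assumes jQuery and Knockout are loaded):

```javascript
// Sketch only: assumes Default.aspx has a static [WebMethod] named GetCustomers.
function CustomerViewModel() {
    var self = this;
    self.customers = ko.observableArray([]);

    self.load = function () {
        $.ajax({
            url: "Default.aspx/GetCustomers",   // page method call, not a postback
            type: "POST",
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            success: function (result) {
                // ASP.NET wraps the return value in a "d" property
                self.customers(result.d);
            }
        });
    };
}

ko.applyBindings(new CustomerViewModel());
```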

One of the reasons I say that I’m starting a series is because I want to point out the little ‘gotchas’ that pop up when doing some of this coding. I’m still in the learning (and stealing of ideas) stage, and I want to document what I’m seeing 🙂 All of the things I’m writing at the moment are coming from issues that my new team is running into.

The first gotcha that we ran into was when creating a default Microsoft WebForms project. The latest default project has a ton of great stuff set up, but some things that have been preset get in the way. Problem number one is AutoRedirectMode in the App_Start/RouteConfig.cs or App_Start/RouteConfig.vb file. The default setting is RedirectMode.Permanent. This needs to be changed to RedirectMode.Off to get web method calls to work. (See this article: http://stackoverflow.com/questions/23667083/i-cannot-get-my-webmethod-to-work-in-asp-net. Note that the answer is in the comments, not the answers.)
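For reference, this is roughly what the change looks like in App_Start/RouteConfig.cs (the VB file is the same idea); it assumes the Microsoft.AspNet.FriendlyUrls package that the default template pulls in:

```csharp
using System.Web.Routing;
using Microsoft.AspNet.FriendlyUrls;

public static class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        var settings = new FriendlyUrlSettings();
        // The template default is RedirectMode.Permanent, which intercepts
        // the POSTs that [WebMethod] page methods depend on; turn it off.
        settings.AutoRedirectMode = RedirectMode.Off;
        routes.EnableFriendlyUrls(settings);
    }
}
```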

There were more challenges that we had to work through, so more articles are on their way!

Wow, this blog has been (mostly) active for over 10 years. My writing has been pretty quiet since September. Not because of lack of activity, but for the complete opposite reason! Too much going on! One of the major things that has happened is that I left my previous company, and have started working with a much smaller one. I’ve moved from DevOps back into development, and have actually been working with Visual Basic.NET, Windows Presentation Foundation, and JavaScript / jQuery / Knockout.js. I have written more code in the last two months than I had in the last 3-4 years, and I’m VERY happy about that! I am now also mentoring a couple of developers, and doing a LOT of learning myself.

On the music front, the ‘year-and-a-bit more’ of gear continued. I swear I thought that I was done recently. Time has a way of changing that, though! I’ve picked up a LOT of sound gigs lately, and they are paying much better than the late-night bar gigs. To continue doing them, though, I’ve had to make a couple of updates, which I’ll be reviewing soon. Plus, my taste in guitars has changed a bit recently, too. I’ve got to say, Fender and Gibson have stepped up their game in the last couple of years, and I think that is directly correlated to the fact that PRS guitars’ sound and build quality are amazing.

In .NET 3.5, adding a Service Reference to a web service would automatically use the proxy server. In .NET 4.0, something changed: the call fails with the error ‘The remote server returned an error: (407) Proxy Authentication Required.’ Finding the answer to this problem was a bit difficult, as WCF 4.0 uses the term ‘proxy’ for its code proxies. It took a bit, and the answer was dead simple…
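The post stops short of spelling the fix out. The usual cure for a 407 behind an authenticating proxy in .NET 4.0 is to hand the request the machine’s default proxy and the current user’s credentials in web.config; this is the standard fix for that error, though I can’t be certain it’s the exact one the author had in mind:

```xml
<system.net>
  <!-- use the system proxy, and send the current user's credentials to it -->
  <defaultProxy useDefaultCredentials="true" />
</system.net>
```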

A while back, I wrote about Microsoft and tight coupling. My development team is facing the same issue, if on a much smaller scale. We have one database and one common library for 5 different programs. We are doing this to appease the great god, Reusability. At one point, we had several different libraries that worked together. The problem we ran into was that when a change was made in one library, it was not properly tracked through the other libraries. This would lead to failed builds. Not a good thing.

On the other hand, we now have a more, ah, interesting problem. When one piece of software goes to production, they ALL have to go to production. One stored procedure change can change the behavior of the common library, which effectively changes the behavior of all of the applications. We have started merging the applications, but it is a slow, time-consuming process.