Azure has been churning out features faster than anything I can remember coming out of Microsoft. I finally moved my production apps to Azure this weekend. It was a pretty painless move, as far as data center moves go. One of the biggest roadblocks I hit was trying to create virtual applications for each of my apps inside one Azure Web Site. I wanted to go this way to get the benefits of having one DNS entry, coupled with an SSL certificate, and eventually wildcard subdomains. It seemed the smart way to go, given that each app is isolated and I've been running that way on my own IIS server for a couple of years (or more) now.

The key to getting things right is to select your Azure Web Site in the portal, and go to Configure. Then, at the bottom of the page you can add your virtual applications. The placeholder text says "PHYSICAL PATH RELATIVE TO SITE ROOT". Don't listen to that. Instead, make your virtual applications live underneath site\wwwroot.

If you do that, you can then download the Publish Profile Settings from your dashboard and update the <DeployIisAppPath> element to look like this:

<DeployIisAppPath>MySite/app1</DeployIisAppPath>

Now when you deploy, the files will land on the server in the right directory.
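For reference, a publish profile (.pubxml) fragment with that edit applied might look roughly like this ("MySite" and "app1" are placeholders for your own site and app names):

```xml
<!-- Illustrative fragment only; your profile will contain many more properties -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <WebPublishMethod>MSDeploy</WebPublishMethod>
    <!-- Site name, then the virtual application under site\wwwroot -->
    <DeployIisAppPath>MySite/app1</DeployIisAppPath>
  </PropertyGroup>
</Project>
```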

I couldn't find a way to get the deploy to put the files in the site\app1 directory on the server. DeployIisAppPath and RemoteSitePhysicalPath could not get me up to that level in this setup. If you know of a better way, I'd love to hear about it!

Using VMware Fusion 5 to run Windows 8 Pro 64-bit from a Boot Camp partition on the Mac has been amazing. One bit of trouble popped up recently, though, when we moved offices to a new location. My VMware networking uses Bridged mode, and everything works perfectly at home. But at the office, I got nothing but the warning that I had no internet.

I called VMware support, and they did a laudable job of getting back to me over the phone and trying to help, but in the end the recommended solution was either to run under Bridged mode with the Firewall disabled, or to use NAT. I figured out after the call that disabling the Firewall made things work because the Public profile's option to block connections no longer applied. So my search began: how to change the network type from Public to Private.

The final answer was rather simple. Launch Explorer, select Network on the left, and then click the yellow bar that offers to Enable sharing. Doing that marked the network as Private. I seem to recall being offered the choice to turn sharing on the first time I connected at the new location; I probably said Don't Share, which marked the network as Public, which meant the Firewall rules for the Public profile were applied, and that's why I couldn't access anything.
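On Windows 8 you can also flip the category from an elevated PowerShell prompt; this is a sketch that assumes your connection is named "Network" (run Get-NetConnectionProfile first to see the actual name):

```powershell
# List current connection profiles and their Public/Private category
Get-NetConnectionProfile

# Mark the connection as Private ("Network" is a placeholder name)
Set-NetConnectionProfile -Name "Network" -NetworkCategory Private
```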

Here are my final notes on getting HockeyApp to integrate nicely with MonoTouch:

Use the MonoTouch bindings from GitHub. I cleaned up the sample to demonstrate what happens when an exception is thrown, and also added the DebugLogEnabled property binding in case things aren't working. Also be sure to go through the readme in the GitHub project to add the EnableCrashReporting() method.

Place a copy of the HockeySDKResources.bundle in the root of your project directory (a bundle is really a directory of files). Select the Display All Files option from the Solution (not the project).

If you add a reference to the precompiled HockeyApp.dll and get build errors when compiling to device (error MT5202: Native linking failed. Please review the build log), add the following to your Additional mtouch arguments: -gcc_flags "-framework CoreGraphics -framework CoreText -framework QuartzCore"

Be sure to do a Rebuild All, as the UUID of the dSYM doesn't update unless you do this, which will cause problems symbolicating the crash log.

Make certain that your info.plist has a "Bundle versions string (short)" (CFBundleShortVersionString) entry.
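If the entry is missing, adding it by hand to the plist looks like this (1.0 is just an example value):

```xml
<key>CFBundleShortVersionString</key>
<string>1.0</string>
```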

HockeyApp only deals with unhandled exceptions. So if you put a try/catch around the code in your Main.cs, any exception caught there is no longer unhandled and will never get processed by HockeyApp. Ask me how I know... :(
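In other words, your entry point should let exceptions escape. A minimal MonoTouch Main.cs with no try/catch looks like this (assuming the usual AppDelegate name):

```csharp
using MonoTouch.UIKit;

public class Application
{
    static void Main(string[] args)
    {
        // Do NOT wrap this in a try/catch: HockeyApp only sees
        // exceptions that are truly unhandled.
        UIApplication.Main(args, null, "AppDelegate");
    }
}
```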

I was using TestFlight for my existing Xamarin MonoTouch project, but it didn't really do what I wanted. I wanted something that would centralize crash reporting for me when something went wrong with my app after it was downloaded from the App Store. TestFlight never really worked for that (at least for me). It allowed me to send beta versions of my app, which was cool, but I wanted a bit more.

I placed the bundle in the root of my project directory (a bundle is really a directory of files). The instructions on that site no longer work for a simple Include in Project, so I used the "Add Files from Folder" option instead.

I was getting a build error when compiling to device (error MT5202: Native linking failed. Please review the build log), which was really due to an undefined symbol for architecture armv7: "_CTFontCreateCopyWithAttributes". I fixed that by adding the following to my Additional mtouch arguments:

-gcc_flags "-framework CoreGraphics -framework CoreText -framework QuartzCore"

After all of that, I was able to get things compiled. I then followed the deployment suggestions from the readme.md page on the GitHub project.

I was able to upload the dSYM.zip and ipa files, get the email invite, and try to download the app. I initially received an "Unable to install the app at this time" error message. That led me to believe the device wasn't imported, so I tried to import it using the HockeyApp bookmarklet. It brought a list of existing UDIDs back to the HockeyApp site, but the import did not save those devices. I need to drill down on this some more to get a solid list of steps needed to release both beta and App Store versions.

I'd also like to be able to have logging data stream to the HockeyApp server and be correlated with a specific device. I'm not 100% certain I've done things properly, but this was a good cookbook on what I did to get things at least uploaded to the HockeyApp server. If I can get this to work, it's well worth the $10 per month.

Unfortunately, it's not as easy as just throwing the second drive in, moving the partition over, and being done. For starters, Boot Camp Assistant does a little bit more than simply add a Windows partition for you, so creating my own partition didn't seem to work. Boot Camp Assistant did create a partition for me on my second drive, but I then had a partition on both drives named "BOOTCAMP". I then decided the machine was ready for Windows 8 64-bit, but unfortunately, I was greeted with a black screen and blinking cursor when trying to boot off the external USB drive to finish the Windows install.

After all of that pain (and various utilities to clean up the bad things I did in the last paragraph, like /sbin/fsck -fy), and some failed attempts to use VMware Fusion to restore the prior BOOTCAMP partition, I found this thread on macrumors.com. I'm reposting a portion of the brilliant post by richlee111 that finally got things working. Granted, it was a lot of opening and closing of the MacBook, and I had to deal with a stripped screw on one occasion, but everything worked beautifully thanks to his advice:

So if you want to run 2 HDDs from your MacBook, with one being for boot camp, the steps below worked for me:
- Take out the MCE optibay and put back the superdrive into its original location.
- Install the drive that you want to install boot camp onto into the original HDD drive bay.
- Stick the original OSX install disk into the superdrive and first install Mac OSX onto it. Realize that you are only doing this to run the boot camp install and will be wiping it out later.
- After you have installed OSX, go through the initial setup and get to the desktop. Run the boot camp assistant, go through with the install, and have it create a partition for boot camp. At this point, it doesn't really matter how big/small the partition is for Windows. You can adjust and resize the partition during the Windows install process when choosing the location and partition.
- Finish the boot camp assistant in OSX, stick your Windows install CD into the drive, and boot into it. This time it should work.
- Once you have completed the Windows installation and you are at the Windows desktop, stick the Mac OSX CD back into the drive and run the setup.exe. This will install all the drivers that will make it recognize all the Mac hardware, etc.
- Finally, take out the CD drive, swap back in the optibay, put your boot camp HDD in there, and put back the HDD with your Mac OS.

After getting my WHS Console issue resolved, I noticed that my Shared Folders tab listed several of my folders as "Failing (Check Health)". That led me to look at the Server Storage tab where one of the drives was reported as "Missing". The drive was dead. Not to worry, I thought, because I had Duplication turned on for those folders that I really did not want to lose (e.g. music, photos).

I spot-checked several items in those folders and noticed that they weren't accessible. I tried to copy them off of the WHS server to my local PC and was greeted with an "Error 0x8007048F: The device is not connected." message. I had removed the bad drive from the WHS and rebooted, and still, things looked bad.

Thanks to a post in the We Got Served forums, the answer was to simply go to the WHS Console's Server Storage tab and Remove the bad drive in software as well. After doing that, I had to let WHS do its thing, but after a couple of hours, all of my files were fine, the folders were listed as "Healthy", and I have a new hard drive on the way.

What I thought was a potentially devastating issue turned out to be handled easily, reliably, and gracefully through WHS. What an awesome device.

I've had pretty good luck with my WHS (v1) system. It's been mostly fire and forget. I've had good success restoring 2 computers from the automatic backup. In short, I've been pretty happy. This is the first of two posts about problems I've now had with it: one minor (this one), and the next potentially devastating.

For a while now, I haven't been able to select the "Windows Home Server Console" menu to connect to my WHS Console. It would sit there forever and just error out.

I was able to RDP into the box, though, and from there I was able to get to the C:\Documents and Settings\All Users\Application Data\Microsoft\Windows Home Server\logs\console.*.log file. In there, I found the error that pointed me to the problem.

I then noticed that this file on the server was blank: c:\program files\windows home server\homeserverconsole.exe.config. I deleted it, rebooted the server, and was able to connect to the Console again. What I found after connecting, though, was most troubling. More on that later...

The new DisplayMode engine in ASP.NET MVC 4 is rather nice. By default, it will detect whether you come in from a mobile device or a desktop machine, and load up a specialized view for you based on that detection.

For example, if you were trying to navigate to http://www.example.com/Players from an iPhone, the default behavior will be to look for Players\Index.Mobile.cshtml, followed by Players\Index.cshtml. Whichever it finds first will be the view that gets used. Note that this also requires you to have a _Layout.Mobile.cshtml file.

This is great for new projects, but I have a non-trivial app with a lot of views built over the years using the technique described by Scott Hanselman. That approach looks for the file in Players\Mobile\Index.cshtml. I was not looking forward to renaming all of those files.

In order to use my already existing file structure, I added one class:
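A sketch of what that class can look like, using DefaultDisplayMode's TransformPath hook (the class name and the mobile-detection condition here are illustrative, not necessarily the original code):

```csharp
using System.Web;
using System.Web.WebPages;

// Maps ~/Views/Players/Index.cshtml to ~/Views/Players/Mobile/Index.cshtml
// instead of the default ~/Views/Players/Index.Mobile.cshtml.
public class MobileFolderDisplayMode : DefaultDisplayMode
{
    public MobileFolderDisplayMode() : base("Mobile")
    {
        ContextCondition = ctx => ctx.GetOverriddenBrowser().IsMobileDevice;
    }

    protected override string TransformPath(string virtualPath, string suffix)
    {
        // Insert the suffix as a folder name rather than a file-name suffix.
        var dir = VirtualPathUtility.GetDirectory(virtualPath);
        var file = VirtualPathUtility.GetFileName(virtualPath);
        return dir + suffix + "/" + file;
    }
}
```

Registered in Application_Start with something like DisplayModeProvider.Instance.Modes.Insert(0, new MobileFolderDisplayMode()); so it runs ahead of the built-in modes.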

After installing the RTM version of VS2012 last week and upgrading my project, I noticed that my web site was broken. The real culprit seemed to be that Microsoft changed how bundling works (the process of combining and minifying resources like JavaScript and CSS files) between the RC version and the RTM version. For an excellent background on what this feature can do for you, see this tutorial.

It turns out my real problem was with Kendo. They don't provide non-minified files in the trial version, and the new bundling mechanism doesn't include minified assets in debug mode, which meant the Kendo assets were getting stripped out when I tried to run under debug (reference here). I was able to use the work-around offered on this thread, and things started working again.

However, there are a few wrinkles when using the bundling code, so I thought I'd capture my experiences here. For example:

For 3rd-party components that don't ship with non-minified assets, you need a better way around the default bundling strategy than the one in the Kendo thread above. The solution provided there, removing the min files from the ignore list, will end up duplicating other assets that do ship both a minified and a regular version, e.g. jQuery, while running in debug mode. There is a workaround, though: if you specify your bundle file's pattern with the {version} macro, the bundling framework is smart enough to include just one copy of the asset. If you use a wildcard in the pattern (as shown in the thread), you will get duplicate min and non-min versions of the asset when you render. Here is what your code should look like:
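A sketch of that registration, using jQuery 1.7.1 as the example (the bundle name and paths are illustrative):

```csharp
using System.Web.Optimization;

public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        bundles.UseCdn = true;

        // The CDN path must hardcode a version; the local Include can use
        // the {version} macro, which matches jquery-1.7.1.js (or whatever
        // version is in ~/Scripts) without pulling in the .min duplicate.
        bundles.Add(new ScriptBundle("~/bundles/jquery",
                "//ajax.aspnetcdn.com/ajax/jQuery/jquery-1.7.1.min.js")
            .Include("~/Scripts/jquery-{version}.js"));
    }
}
```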

When specifying a CDN location, right now, you can't use the {version} macro, so you end up with a hardcoded reference for your CDN link (1.7.1 in the sample) and a {version} macro for the non-CDN reference (which could be 1.7.1, 1.7.2, or anything else that exists in your solution). This means that when you update locally, you need to remember to keep those version numbers in sync. Ideally, the framework would substitute your local version into the CDN path you provide, removing this requirement.

In the tutorial, there is a fallback script that they recommend you write after your Scripts.Render statement so you can gracefully fall back to the local version if the CDN version doesn't load. It would be much better if that fallback code would be emitted for you when you call Scripts.Render.
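That fallback pattern looks roughly like this in the view, again using jQuery as the example (the CDN bundle and local path are illustrative):

```html
@Scripts.Render("~/bundles/jquery")
<script>
    // If the CDN copy failed to load, window.jQuery is undefined,
    // so fall back to the local copy of the script.
    window.jQuery || document.write(
        '<script src="@Url.Content("~/Scripts/jquery-1.7.1.min.js")"><\/script>');
</script>
```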

Speaking of CDN, it appears that there is a very tight coupling assumed between a bundle and a CDN path. In other words, you cannot include multiple assets in one bundle because the CDN path for the bundle assumes it is a reference to a specific file on the CDN. It would be better to have CDN paths be tied to each individual item in the bundle.

Also relating to CDN support: the CDN path will only be used if you set bundles.UseCdn to true AND you either have BundleTable.EnableOptimization set to true or compilation debug set to false in your web.config. Granted, you probably only want to use the CDN when you are pushing to production, but I was trying things out locally and this caught me by surprise. The two settings should be independent of each other. If not, why even bother having the UseCdn property?

It seems that there is quite a bit of friction in the current version of the bundling framework. Fortunately, it resides in the Optimization assembly which can be upgraded independently of the entire MVC framework. I hope Microsoft releases an update very soon to overcome these obstacles. I have every reason to believe that they will since I'm seeing the author of this assembly answering tons of questions on StackOverflow.