June 2012 Entries

Most developers write Windows Phone applications for their own gratification and their own wallets. While most of the time I would put myself in the same camp, I am also a consultant. This means that I have corporate clients who want corporate solutions. I recently got a request for a system rebuild that includes a Windows Phone component. This brought up the question of which aspects are important to consider when building for this situation.

Let’s break it down into the points that are important to a company using a mobile application. The company wants to make sure that its proprietary software is safe from use by unauthorized users. It also wants to make sure that the data is secure on the device.

The first point is a challenge. There is no such thing as true private distribution in the Windows Phone ecosystem at this time. What is available is the ability to flag your application for targeted distribution. Even with targeted distribution you can’t ensure that only individuals within your organization will be able to load your application. Because of this I am taking two additional steps. The first is to register the phone’s DeviceUniqueId within your system. Add a system sign-in and that should cover access to your application.
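For illustration, here is a minimal sketch of reading the device’s unique identifier on Windows Phone 7 using the DeviceExtendedProperties API. Note the app manifest must request the ID_CAP_IDENTITY_DEVICE capability, and how you then register the ID with your back-end system is up to you:

```csharp
using System;
using Microsoft.Phone.Info;

public static class DeviceRegistration
{
    // Reads the device's unique identifier. Requires the
    // ID_CAP_IDENTITY_DEVICE capability in WMAppManifest.xml.
    public static string GetDeviceId()
    {
        object value;
        if (DeviceExtendedProperties.TryGetValue("DeviceUniqueId", out value))
        {
            // The ID comes back as a byte array; Base64 makes it
            // easy to store and compare server-side.
            return Convert.ToBase64String((byte[])value);
        }
        return null;
    }
}
```

On first run you would send this value, along with the user’s sign-in, to your registration service and reject any device that isn’t on the approved list.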

The second half of the problem is securing the data on the phone. This is where the ProtectedData API within the System.Security.Cryptography namespace comes in. It allows you to encrypt your data before pushing it to isolated storage on the device.
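As a rough sketch, encrypting a value with ProtectedData before writing it to isolated storage looks something like this (error handling omitted, and the file name is an arbitrary choice for the example):

```csharp
using System.IO;
using System.IO.IsolatedStorage;
using System.Security.Cryptography;
using System.Text;

public static class SecureStore
{
    // Encrypts a string with the DPAPI and writes it to isolated storage.
    public static void Save(string fileName, string data)
    {
        byte[] protectedBytes = ProtectedData.Protect(
            Encoding.UTF8.GetBytes(data), null);

        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        using (var stream = store.CreateFile(fileName))
        {
            stream.Write(protectedBytes, 0, protectedBytes.Length);
        }
    }

    // Reads the file back and decrypts it.
    public static string Load(string fileName)
    {
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        using (var stream = store.OpenFile(fileName, FileMode.Open))
        {
            var buffer = new byte[stream.Length];
            stream.Read(buffer, 0, buffer.Length);
            byte[] clear = ProtectedData.Unprotect(buffer, null);
            return Encoding.UTF8.GetString(clear, 0, clear.Length);
        }
    }
}
```

The key material is managed by the OS per application, so the data is opaque to anything that gets hold of the raw isolated storage files.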

With the announcement of Windows Phone 8 coming this fall, many of these points will have different solutions. Private signing and distribution of applications will be available. We will also have native access to BitLocker. When you combine these capabilities, enterprise application development for Windows Phone will be much simpler. Until then, work with the above suggestions to develop your enterprise solutions.

As if the Surface announcement on Monday wasn’t exciting enough, today Microsoft announced that Windows Phone 8 will be coming this fall. That itself is great news, but the features came like confetti flying in all different directions. Given that pace I couldn’t capture every feature they covered. A summary of what I did capture is listed below, starting with their eight main features.

Common Core

The first thing that they covered is that Windows Phone 8 will share a core OS with Windows 8. It will also run natively on multiple cores. They mentioned that they have run it on up to 64 cores to this point. The phones as you might expect will at least start as dual core. If you remember there were metrics saying that Windows Phone 7 performed operations faster on a single core than other platforms did with dual cores. The metrics they showed here indicate that Windows Phone 8 runs faster on comparable dual core hardware than other platforms.

New Screen Resolutions

Screen resolution has never been an issue for me, but it has been a criticism of Windows Phone 7 in the media. Windows Phone 8 will support three screen resolutions: WVGA (800 x 480), WXGA (1280 x 768), and 720p (1280 x 720). Hopefully this makes pixel counters a little happier.

MicroSD Support

This was one of my pet peeves when I got my Samsung Focus. With Windows Phone 8 the operating system will support adding MicroSD cards after initial setup. Of course this is dependent on the hardware manufacturer implementing it, but I think we have seen that even feature phone manufacturers have had no problem supporting this in the past.

NFC

NFC has been an anticipated feature for some time. What Microsoft showed today made clear that they didn’t just want it to be for the phone. There is cross-platform NFC functionality between Windows Phone 8 and Windows 8. The demos, while possibly a bit fanciful, showed what could be achieved even in a retail environment. We are getting closer and closer to a Minority Report world with these technologies.

Wallet

Windows Phone 8 isn’t the first platform to have a wallet concept. What they have done to differentiate themselves is to make it so that it is not dependent on a SIM-type chip like other platforms. They have also expanded the concept beyond just banks to other types of credits, such as airline miles.

Nokia Mapping

People have been envious of the Lumia phones having the Nokia mapping software. Now all Windows Phone 8 devices will use NavTeq data and will have the capability to run in an offline fashion. This is a major step forward from the Bing “touch for the next turn” maps.

IT Administration

The lack of features for enterprise administration and deployment was a complaint even before Windows Phone 7 was released. With the Windows Phone 8 release, features such as BitLocker and secure boot will be baked into the OS. We will also have the ability to privately sign and distribute applications.

Changing Start Screen

Joe Belfiore made a big deal about this aspect of the new release. Users will have more color themes available to them and the live tiles will be highly customizable. You will have the ability to resize and organize the tiles in a more dynamic way. This allows for less important tiles or ones with less information to be made smaller.

And There Is More

So what other tidbits came out of the presentation?

Later this summer the API for WP8 will be available. There will be developer events coming to a city near you. Another announcement of interest to developers is the ability to write applications at a native code level. This is a boon for game developers and those who need highly efficient applications. As a topper on the cake there was mention of in-app payment.

On the consumer side we also found out that all updates will be available over the air. Along with this came the fact that Microsoft will support all devices with updates for at least 18 months, and you will be able to subscribe for early updates.

An update to WP7.8 is coming for Windows Phone 7.5 customers. The main enhancement will be the new live tile features. The big bonus is that the update will bypass the carriers. I would assume, though, that you will be brought up to date with all previous patches that your carrier may not have released.

There is so much more, but that is enough for one post. Needless to say, EXCITING!

I make my living as a consultant and a general technologist. I credit my success to the fact that I have never been afraid to pick up any product, language or platform needed to get the job done. While Microsoft technologies are my mainstay, I have done work on mainframe and UNIX platforms and have worked with a wide variety of database engines. Each one has its use, and most times it is less expensive to find a way to communicate with an existing system than to replace it.

So what are the main benefits of expending the effort to learn a new technology?

New ways to solve problems

Accelerate development

Advise clients and get new business opportunities

By new technology I mean ones that you haven’t had experience with before. They don’t have to be the one that just came out yesterday. As they say, those who do not learn from history are bound to repeat it. If you can learn something from an older technology it can be just as valuable as the shiny new one. Either way, when you add another tool to your kit you get a new view on each problem you face. This makes it easier to create a sound solution.

The next thing you can learn from working with different products and techniques is how to solve problems more efficiently. Many times when you are working with a new language you will find that there are specific design patterns that accompany it in normal use. These can usually be applied with most languages. You just needed to be exposed to them.

The last point is about helping your clients and helping yourself. If you can get in on technologies early you will have an advantage over your competition in the market. You will also be able to honestly advise your client on why they should or should not go with a new product. Being able to compare products and their features is always an ability that stakeholders appreciate.

You don’t need to learn every detail of a product. Learn enough to function and get an idea of how to use the technology. Keep eating those technology Wheaties and you will be ready to go the distance in any project.

TechEd this week was a great experience and I wanted to wrap it up with a summary post.

First let me say a thank you to John and Jeff from GWB for supplying power, connectivity and a place to work in between sessions. The blogging hub was a great experience in itself. Getting to talk with other bloggers and other conference goers turned into a series of interesting conversations. And where else can you almost end up in the day 1 highlights video?

The sessions at TechEd were a mixed bag of value. The keynotes rocked, both figuratively and literally, and most of the sessions that I went to were a good experience and had gems of information to take away. There were a few exceptions though. A couple of the sessions turned out to be sales jobs. Nothing turns me off more than that (there will be some really honest comments on those surveys).

TechEd reinforced for me that much of the value is not in the sessions, but in the networking opportunities. I got to talk with several Microsoft team members and MVPs as well as some of the vendor representatives for companies like InRule and ComponentOne. I also got to expand both my local and extended community with discussions at meal times and while waiting for sessions to start. I think this is one of the benefits that a lot of people don’t take advantage of at these conferences, and it should be a bigger part of the advertising.

Exposure to a wide variety of topics, many of which I had not been able to make time for up to this point, was invigorating. The list of topics includes Office 365, Windows Server 2012, Windows 8, Metro and Azure. I can’t wait to get back to work and dig into these subjects in more depth.

The one complaint that I had and heard from other attendees was that there weren’t enough sessions that were actually about development. I realize that TechEd started as an event for IT Pros, but there needs to be more value for the Devs.

It all went by too fast and it will take a couple more days to digest the material, but the batteries are recharged and I’m ready to leverage what I’ve learned. Hopefully we will do it again next year.

While I spend a certain amount of my time creating databases (coding around SQL Server and setting up a server when I have to), it isn’t my bread and butter. Since I have run into a number of times that SQL Server needed to be tuned, I figured I would step out of my comfort zone and see what I could learn.

Brent Ozar packed a mountain of information into his session on making SQL Server faster. I’m not sure how he found time to hit all of his points since he was allowing the audience to abuse him on Twitter instead of asking questions, but he managed it. I also questioned his sanity since he appeared to be using a fruit laptop.

He had my attention though when he stated that he had given up on telling people not to use “select *”. He posited that it could be fixed with hardware by caching the data in memory. He continued by cautioning that having too many indexes could defeat this approach. His logic was sound if not always practical, but it was a good place to start when determining the trade-offs you need to balance. He was moving pretty fast, but I believe he was prescribing this solution predominantly for OLTP databases prior to moving on to data warehouse solutions.

Much of the advice he gave for data warehouses is contained in the Microsoft Fast Track guidance so I won’t rehash it here. To summarize, the solution seems to be the proper balance of memory, disk access speed and the speed of the pipes that get the data from storage to the CPU. It appears to be sound guidance, and the session gave enough information that going forward we should be able to find the details needed easily. Just what the doctor ordered.

Usually speakers take offence if you wear headphones during their talk. For the exam cram session it was a requirement. This was because it was held in a cubicle-walled room with an open top, next to a study hall.

While no-one was going to come out of this session ready to take a test, I am glad that I took the time to attend it. There was a fair amount of material that you should know already if you have ever taken a certification test before. This was packed around a mix of key concepts and some tidbits that marked where some of the pitfalls are for this particular test. The biggest warning was that the test is based on Windows Phone 7.0 and not Mango meaning that you have to be careful that you don’t answer a question in the wrong context.

I would suggest that if you have a chance to attend a free session, grab it. It is a good break from the other hard-core talks and will get your mind into a mode for getting your next certification. Good luck.

I haven’t spent any time looking at Office 365 up to this point. I met Donovan Follette on the flight down from Chicago. I also got to spend some time discussing the product offerings with him at the TechExpo and that sealed my decision to attend this session.

The main actor of his presentation was the BCS – Business Connectivity Services. He explained that while this feature has existed in on-site SharePoint, it is a valuable new addition to Office 365 SharePoint Online. If you aren’t familiar with the BCS, it allows you to leverage non-SharePoint enterprise data sources from SharePoint. The greatest beneficiaries are the end users, who can leverage the data using a variety of Office products and more.

The one thing I haven’t shaken is my skepticism of the use of SharePoint Designer, which Donovan used to create a WCF service. It is mostly my tendency to try to create solutions that can be managed through the whole application life cycle. In the past, migrating anything other than content created by SharePoint Designer through test environments has been near impossible.

There is a lot of end user power here. The biggest consideration I think you need to examine when reaching from your enterprise LOB data stores out to an online service and back is that you are going to take a performance hit. This means that you have to be very aware of how you configure these integrated self-serve solutions. As a rule, make sure you are using the right tool for the right situation.

I appreciated that he showed both no code and code solutions for the consumer of the LOB data. I came out of this session much better informed about the possibilities around this product.

While digesting my lunch it was time to digest some TFS Build information. While much of my time is spent wearing my developer’s hat, I am still a jack of all trades, and automated builds are an important aspect of any project. Because of this I was looking forward to finding out what new features are available in the latest release of Team Foundation Server.

The first feature that caught my attention was the TFS Admin Client. After being used to dealing with NAnt in the past, it is nice to see a build configuration GUI that is so flexible and well thought out. The bonus is that the tools incorporated in Visual Studio 2012 are just as feature rich. Life is good.

Since automated builds are the hub of your development process in a continuous integration shop, I was really interested in the process-related options. The biggest value add that I noticed was merge gated check-ins. Merge, or batch, gated check-ins are an interesting concept. If the build breaks with all the changes batched together, then TFS will run separate builds for each of the check-ins. This ability to identify the actual offending check-in can save a lot of time and gray hair.

The safari of TFS Build that was this session was packed with attractions. How do you set up builds, what are the different flavors of builds, and how does the system report how the build went? I would suggest anyone who is responsible for build automation spend some serious time with TFS 2012 and VS2012.

My morning sessions for day three were dominated by Team Foundation Server. This has been a hot topic for our clients lately, so it really struck a chord.

The speaker for the first session was from Boeing. It was nice to hear how a company mixes both agile and waterfall project management. The approaches that he presented were very pragmatic. For their needs, reporting was the crucial part of their decision to use TFS. This was interesting since it is probably the last aspect that most shops would think about.

The challenge of getting users to adopt TFS was brought up by the audience. As with the other discussion points, he took a very level-headed stance. The approach he was prescribing was to eat the elephant a bite at a time instead of all at once. If you try to convert your entire shop at once, the culture shock will most likely kill the effort.

Another key point he reminded us of is that you need to make sure that standards and compliance are taken into account when you set up TFS. If you don’t implement a tool, and the processes around it, in a way that complies with the standards bodies that govern your business, you are in for a world of hurt.

Ultimately the reason they chose TFS was that it was the first tool that incorporated all the ALM features they needed. It also reduced licensing costs compared to buying all the different tools they would need to complete the same tasks. They got to this point by doing an industry evaluation. Although TFS came out on top, he said that its big remaining gap is in the Java area. Of course, in this market there are vendors helping to close that gap.

The second session was on how continuous feedback in agile is a new focus in VS2012. The problems they intended to address included cycle time, average time to repair, and root cause analysis.

The speakers fired features at us as if they were firing a machine gun. I will just say that I am looking forward to digging into the product after seeing this presentation. Beyond that I will simply list some of the key features that caught my attention.

Paul Sheriff was a real character at the start of his MVVM in XAML session. There was a lot of sarcasm and self-deprecation going on before things got started. That is never a bad way to get things rolling right after lunch. Then things got semi-serious.

The presentation itself had a number of surprises, but not all of them had to do with XAML. When he flipped over to his company’s code generation tool it took me off guard. I am used to generators that create code for a whole project, but his tools were able to create different types of constructs on demand. It also made it easier to follow what he was doing than some of the other demos I have seen this week where people were using code snippets.

Getting to the heart of the topic I found myself thinking that I may have found my utopia for application development in MVVM. Yes, I know there is no such thing, but this comes closer than any other pattern I have learned about. This pattern allows the application to have better separation of concerns than I have seen before. This is especially true since you can leverage data binding. I’m not sure why it has taken me so long to find time for this subject.

As Paul demonstrated, using this pattern with XAML gives you multi-platform reusable code when you leverage common utility classes and ViewModel classes. The one drawback I see is that you have to go to the lowest common denominator between the platforms you want to support, but you always have to weigh the trade-offs.
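To make the separation-of-concerns point concrete, here is a minimal sketch of the kind of view model the pattern calls for. The class and property names are my own invention, but the INotifyPropertyChanged plumbing is the standard mechanism that XAML data binding relies on:

```csharp
using System.ComponentModel;

// The view's XAML binds to Name; the setter raises PropertyChanged
// so the UI refreshes automatically, keeping presentation logic out
// of the code-behind.
public class CustomerViewModel : INotifyPropertyChanged
{
    private string name;

    public string Name
    {
        get { return name; }
        set
        {
            if (name != value)
            {
                name = value;
                OnPropertyChanged("Name");
            }
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```

In the view, something like `<TextBlock Text="{Binding Name}" />` stays in sync with the view model without any code-behind, and the same view model class can be reused across XAML platforms.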

And finally, the Visual Studio nuggets just keep coming. Even though it has been available for several generations of Visual Studio I have never seen someone use linked files within a solution. It just goes to show that I should spend more time exploring the deeper features of each dialog.

Windows 8 is here (or at least very close) and that was the main feature of this morning’s keynote. Antoine LeBlond started off by apologizing to the IT professionals since he planned on showing code. I’m not sure if IT Pros are that easily confused or why you would need such a disclaimer. Developers do real work, IT Pros just play with toys (just kidding).

The highlights of the Windows 8 keynote for me started with some of the UI design elements that I had not seen when I was shown one of the Build tablets. Specifically I liked the AppBar features that we have become used to with Windows Phone and some of the gesture features. Even though they have been available on other platforms before I think Microsoft really got them right.

Two other great features of Windows 8 that they demonstrated were the Hyper-V capabilities and the ability to run Windows 8 anywhere from a USB key. My jaw dropped through the floor seeing a feature rich OS boot off of a thumb drive. WOW! I also can’t wait to get rid of dual booting just to run Hyper-V images when developing.

The morning continued with a session on Metro XAML development with Tim Heuer. While it included a lot of great XAML Metro demos, I was pleasantly surprised by some of the things I found out about Visual Studio 2012. Finding out that Blend is now integrated with VS2012, after working with them as separate applications, was an encouraging start.

Moving on to Metro, he introduced the nugget that WinRT is async everywhere. How deep this model goes will be an interesting thing to find out as I learn more about developing for the platform. Thankfully he followed that up with a couple of new keywords, async and await, that eliminate a lot of the plumbing that has been required in the past for asynchronous operations.
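A quick sketch of what those keywords buy you; DownloadStringAsync here is a hypothetical stand-in for a real WinRT or network call, not an actual API:

```csharp
using System;
using System.Threading.Tasks;

public class DownloadExample
{
    // Before async/await, chaining asynchronous calls meant explicit
    // callbacks or completed-event handlers. With await, the method
    // reads top-to-bottom while still running asynchronously.
    public async Task<int> GetLengthAsync(Uri uri)
    {
        string content = await DownloadStringAsync(uri);
        return content.Length;
    }

    // Stand-in for a real asynchronous operation.
    private Task<string> DownloadStringAsync(Uri uri)
    {
        return Task.FromResult("sample payload");
    }
}
```

The compiler rewrites the method into the state machine you would otherwise have written by hand, which is the plumbing Tim was referring to.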

Tim also related that since the Metro framework is relatively small and most apps will use a significant amount of it, the entire surface is referenced by default. This is a contrast to adding namespaces and assemblies one after another as we normally do.

This was such a power packed session that I can’t detail it all here so here is the teaser list.

It is Monday afternoon and the last couple of sessions have been disappointing. I started out in the Nokia: Learning to Tile session. I guess I should have read the summary more closely because it turned out to be more of a Nokia/WP7 history and sales pitch. “I’m outa here!”

I made a quick venue change and now we are learning about Private Cloud Architecture. The topic and the material were very informative. The speaker even had a couple of quotable statements. The first quote was “You can trust me … I’m a doctor”. The second was a new acronym (at least for me): CAVE – committee against virtually everything. I am sure I have dealt with them more than once in my career. Unfortunately he didn’t just have a doctorate; the presentation read like a medical journal. While I didn’t enjoy the presentation, I am looking forward to getting my hands on the slides to review.

It has been a fun first morning at TechEd North America. The keynote was both informative and entertaining. Some of the high points included a walkthrough of Windows Server 2012 and its new Hyper-V capabilities and use of ODX (offloaded data transfer). Seeing stats like a Hyper-V VM running with 1TB of memory, and watching ODX move a 10GB file at a rate of 1GB per second, was really impressive.

The fun started when Scott Guthrie was doing his keynote demo and popped up an iPhone emulator from Visual Studio. There is just something wrong with that picture and the WPDev community agreed. This was followed by an iPad emulator and by that time the groans across Twitter were rolling.

Later in the morning, The Gu kept us laughing in the Azure Foundations session when he named a server Dude (I believe a suggestion from the crowd). After that I thought I was watching the turtle in Finding Nemo. Duuuuude!

In the expo area the line for the Windows Phone booth was ridiculous. Granted this is a Microsoft event and is sure to be full of MS fan boys, but the only other time I have seen that much enthusiasm for Windows Phones in one place was on the flight down.

I am sure there will be a lot more to get excited about over the next few days. Stay tuned.

I have been blogging on Geeks With Blogs since 2005 and on other blogging sites before that. In this age of Twitter, Facebook and G+ it feels like we are in the post-blog age, and yet here I continue. There are several reasons for this. The first is that I still find it to be the best place for self publishing long form thought that won’t fit well on Twitter or Facebook. Google+ allows for this type of content, but it suffers from the same scroll factor as the other social media platforms. If you aren’t looking at the right moment you miss it. On a blog I can put complete thoughts with examples, and people can find what they want via keywords or a search engine.

The second reason I blog is to have a place for me to put information I want to be able to reference back to later. Although I use OneNote which is now accessible everywhere the blog gives me somewhere to refer co-workers and clients when I have solutions for problems I have previously solved.

I know that other people use their blog as a resume builder, but that hasn’t been one of my primary concerns. Don’t get me wrong. Opportunities do come up because you put out well thought out, topical material. That just isn’t one of my top motivators.

I don’t always find the time to blog or even have anything to say lately, but I will continue to produce content for myself and others to learn from and hopefully enjoy.

This is my first time going to TechEd, and I want to make sure that I get the most out of it. The first goal is to make sure that I get to sessions that cover as many topics as possible. This is important for me as a Solution Architect consultant specializing in Microsoft technologies. To this end I have spent some time going through the sessions on the myTechEd site.

The other reason for the trip is to connect with the Microsoft development community. This includes both members of my local Midwest community and the global communities that I have only had online connections with. Sharing the experience and getting a chance to exchange ideas with new and old friends is a great part of any convention.

In any case, the time is getting close and I am looking forward to the trip.

Tim is a Solutions Architect for PSC Group, LLC. He has been an IT consultant since 1999, specializing in Microsoft technologies. Along with running the Chicago Information Technology Architects Group and speaking on Microsoft and architecture topics, he was also a contributing author on "The Definitive Guide to the Microsoft Enterprise Library".