Microsoft is launching Windows Phone 7 today and I fear it may be too late. I, like many Microsoft watchers, was an early adopter of Windows Mobile. I used my Windows Mobile 6.x phone until it broke, and even then, I put tape over it and got another 6 months out of it.

I held out until I arrived in Hong Kong last summer to buy a new phone, an HTC Magic running Android 1.6. I liked it so much that I was one of the first people in Hong Kong to buy a Nexus One back in January, this after I had one of the first sneak peeks at WP7 on Microsoft’s campus in January.

As I used my “Google Phone,” as I call it, I soon became dependent on Gmail for my main email communication, and on other Google applications for my daily chores. I used to use hosted Exchange and POP with Outlook and Outlook Web Access; now I am 100% Gmail, and I don’t even have Outlook installed on my machine. I frequently show a Hong Kong taxi driver where I am going on Google Maps, and I use Layar (and a little Foursquare) to find new places to eat in Hong Kong all the time. I use the camera on the phone so much that I don’t even carry my point-and-shoot anymore. I sit at Starbucks and use the Wi-Fi hotspot feature in Android 2.2 to work all day. The list goes on.

Google lured me in with a new phone and then before I knew it, I was deep inside the Googleplex and outside of the Microsoftplex.

Today, Windows Phone 7 ships. The problem with Windows Phone 7 is that it is one year too late. Last summer, all the people like me with a Windows Mobile 6 phone had a new contract or a broken phone and went with an iPhone or Android. Those customers may never come back. I may not.

Last week I wrapped up a successful high altitude trek and climb in Nepal. I did this to help raise money for Education Elevated, a charity that is building a school and library in rural Nepal.

Kids first, then mountains: We visited two Hillary Schools on the trek, one in Khumjung and the other in Junbesi. We started in Jiri, and it took us four days to trek to the first school in Junbesi. It then took us another four days to reach Khumjung. I scoped out the IT training. ;)

As always the kids are super cute.

After the schools, it was time to gain altitude. (OK, both schools are already over 3000m/10,000’.) After we visited the Hillary School in Khumjung, we started to gain altitude, and the scenery was stunning. We climbed the Gokyo Valley side, not the Everest side that I did in 2008. Even our Sherpas geeked out and took tons of photos.

We trekked through several small villages over three days. Our Sherpas were lazy. :)

After three days we reached the town of Gokyo (15,000’), right on the third glacial lake.

The next morning we climbed Gokyo Ri (5400m/17500’) and the views were amazing, the best I have ever had in Nepal. If you are planning a trip to Nepal, make Gokyo your base and Everest Base Camp your secondary target.

The cool thing about being on top of Gokyo Ri is that you have a 360° panoramic view of the Everest portion of the Himalayan range. Here is a video I shot trying to show it off.

We started the climb at 5:15am; I argued with Ngima Sherpa and Kathleen about that, but lost. We got up in about 1.5 hours, just in time for the sunrise. While that is cool, the sun rises directly over Mt. Everest, so the photos (and the video above) are not perfect. Sorry, if you want a better view of Everest, you will just have to climb Gokyo Ri yourself. ;)

After a short rest, we trekked to a village below the Cho La pass (5400m/17,500’). The pass was very, very hard: straight up in the snow for over an hour, much of the time on your hands and knees.

Look at me climb (yes, that is Bollo Sherpa behind me; he did it in sneakers):

Our Sherpas are too cool for school on top of the pass. (But they did take the Clif Bars I gave them.)

The view from the pass is pretty stunning:

Once you are up over the pass, you think the hard part is over, right? Ha! Now you have to walk between two Himalayan peaks (taller than all but about 15 mountains in the world) in the snow, over a crevasse-ridden field. (Thank God for my crevasse rescue training in Alaska! I had to teach Kathleen how to probe for crevasses!)

After a few hours in the snowfield, we finally reached our destination; it was about 1pm, after roughly 7 hours of climbing and trekking. (We started super early to avoid the wind and sun making the crevasses unstable.) After lunch, we pushed on 3 more hours to Lobuche village at about 5000m/16,000’.

The next day we pushed on to Everest Base Camp and then headed down the mountain.

A few days later we were back in Kathmandu dreaming about our next Nepalese trek!

On Monday I am leaving for a 3-week trek to visit the Hillary School in Khumjung, Nepal, as well as Everest Base Camp. The goal of this trip is to raise money for the Education Elevated charity that I have been working with (see last year’s school work project write-up here).

We’ve raised a ton of money so far, but you can still donate here. For those of you who have donated already, thanks!

If you use Telerik software, why not tell the world how much you like it by voting in the 2010 DevProConnections Community Choice Awards? Voting is open now, and Telerik is nominated in 16 of the 26 total categories! While it’s great to take home awards like our recent Best of TechEd 2010 trophy, there is something unique about letting the voters on the Interwebs determine your fate.

And then remind your co-workers, Twitter followers, Facebook friends, LinkedIn connections, DNUG members, and anyone else you see to vote in this year’s awards, too. To help you successfully cast your ballot for Telerik’s nominations this year, here is a quick voting guide for the categories where you’ll find us:

On Saturday, November 6th, 2010, from 8:00 AM until 6:00 PM, the Fairfield / Westchester developer community will be holding their 4th annual Code Camp! The event will be hosted by CITI, a unit of The University of Connecticut School of Business, on the Stamford, CT Campus.

The continuing goal of the Code Camps series is to provide an intensive developer-to-developer learning experience that is fun and technically stimulating. The primary focus is on delivering programming information and sample code that can be put to practical use. The event is free.

This is an event by the developer community, for the developer community. The content is original and developed by you. Let's work together to make this event a success. To apply for a speaking slot, please first register as a speaker here: http://bit.ly/fwccspeaker.

Then, with the email address you registered with on the speaker page, please add as many abstracts as you’d like here: http://bit.ly/fwccsession. Submit on anything you’d like related to .NET development.

Subject: You must register at https://www.clicktoattend.com/invitation.aspx?code=149443 in order to be admitted to the building and attend.

Everyone knows that they should be writing better test cases for their applications, but how many of us really do it? In Visual Studio, unit testing is an integrated part of the development environment, so there is no longer any reason to avoid test-driven development and automated unit testing. In this seminar you will learn how to architect your applications to make testing quicker and easier, and how to use the tools in Visual Studio to help you do the testing.

You will learn:
1. How to architect for test-driven development
2. Creating test cases
3. Using the Visual Studio unit testing tools

Speaker: Paul D. Sheriff is the President of PDSA, Inc. (www.pdsa.com), a Microsoft Partner in Southern California. Paul acts as the Microsoft Regional Director for Southern California, assisting the local Microsoft offices with several of their events each year and serving as an evangelist for them. Paul has authored literally hundreds of books, webcasts, videos and articles on .NET, WPF, Silverlight and SQL Server. Paul can be reached via email at PSheriff@pdsa.com. Check out Paul's new code generator 'Haystack' at www.CodeHaystack.com.

In Part I we looked at the advantages of building a data warehouse independent of cubes/a BI system, and in Part II we looked at how to architect a data warehouse’s table schema. In Part III, we looked at where to put the data warehouse tables, and in Part IV, we looked at how to populate those tables and keep them in sync with your OLTP system. Today, in the last part of this series, we will take a quick look at the benefits of building the data warehouse before we need it for cubes and BI by exploring our reporting and other options.

As I said in Part I, you should plan on building your data warehouse when you architect your system up front. Doing so gives you a platform for building reports, or even applications such as web sites, off the aggregated data. As I mentioned in Part II, it is much easier to build a query and a report against the rolled-up table than against the OLTP tables.
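To make this concrete, here is a sketch of the same question, total sales by product, asked both ways. (The warehouse names here, `dw.FactSales`, `dw.DimProduct`, `ProductKey`, and `TotalSales`, are assumptions standing in for the Part II schema; the OLTP side uses the standard Northwind tables.)

```sql
-- Against the OLTP (Northwind) tables: joins plus the line-item math, every time
SELECT p.ProductName,
       SUM(od.UnitPrice * od.Quantity * (1 - od.Discount)) AS TotalSales
FROM dbo.Orders AS o
JOIN dbo.[Order Details] AS od ON od.OrderID = o.OrderID
JOIN dbo.Products AS p ON p.ProductID = od.ProductID
GROUP BY p.ProductName;

-- Against the rolled-up fact table: the hard work was done once, at load time
SELECT dp.ProductName,
       SUM(f.TotalSales) AS TotalSales
FROM dw.FactSales AS f
JOIN dw.DimProduct AS dp ON dp.ProductKey = f.ProductKey
GROUP BY dp.ProductName;
```

The second query is the kind of thing an end user, or a simple report, can get right on its own.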

To demonstrate, I will make a quick Pivot Table using SQL Server 2008 R2 PowerPivot for Excel (or just PowerPivot for short!). I have shown how to use PowerPivot before on this blog, though usually against a SQL Server table, a SQL Azure table, or an OData feed. Today we will use a SQL Server table again, but rather than build a PowerPivot against the OLTP data of Northwind, we will use our new rolled-up fact table.

To get started, I will open up PowerPivot and import data from the data warehouse I created in Part II. I will pull in the Time, Employee, and Product dimension tables as well as the Fact table.

Once the data is loaded into PowerPivot, I am going to launch a new PivotTable.

PowerPivot understands the relationships between the dimension and fact tables and places the tables in the designer shown below. I am going to drag some fields into the boxes on the PowerPivot designer to build a powerful, interactive Pivot Table. For the rows I will choose the category and product hierarchy, summing on total sales. For the columns (the field we pivot on) I will use the month from the Time dimension, to get a sum of sales by category/product by month. I will also drag Year and Quarter into my vertical and horizontal slicers for interactive filtering. Lastly, I will place the Employee field in the Report Filter pane, giving the user the ability to filter by employee.

The results look like this; here I am dynamically filtering by 1997, the third quarter, and employee Janet Leverling.

This is a pretty powerful interactive report built in PowerPivot using the four data warehouse tables. If there were no data warehouse, this Pivot Table would have been very hard for an end user to build. Either they or a developer would have had to perform joins to get the category and product hierarchy, plus more joins to get the order details and the sum of the sales. In addition, the breakout and dynamic filtering by Year and Quarter, and the display by month, are only possible because of the DimTime table; without the data warehouse tables, the user would have had to parse out those DateParts. Just about the only thing the end user could have done without assistance from a developer or a sophisticated query is the employee filter (and even that would have taken some PowerPivot magic to display the employee name, unless the user did a join).

Of course, Pivot Tables are not the only thing you can create from the data warehouse tables: you can also create reports, ad hoc query builders, web pages, and even an Amazon-style browse application. (Amazon uses its data warehouse to display inventory and its OLTP system to take your order.)

In Part I we looked at the advantages of building a data warehouse independent of cubes/a BI system, and in Part II we looked at how to architect a data warehouse’s table schema. In Part III, we looked at where to put the data warehouse tables. Today we are going to look at how to populate those tables and keep them in sync with your OLTP system.

No matter where your data warehouse is located, the biggest challenge with a data warehouse, especially one you are going to do real-time reporting off of, is that the data has to be published from the transactional (OLTP) system. By definition, the data warehouse is out of date compared to the OLTP system.

Usually when you ask the boss, “How much latency can you accept between the OLTP and data warehouse systems?” the boss will reply: none. While that is impossible, the more time and money you have to develop the system, the closer to real time you can get. Always bring up the American Express example from Part I as proof that your system can accept at least some latency.

If you remember the code examples from Part II, most of the time you have to query the OLTP data, aggregate it, and then load it into your new schema. This is the process of extracting, transforming, and loading (ETL) data from the OLTP system into the data warehouse. The workflow usually goes like this: at a pre-set time (on every change, hourly, nightly, or weekly), query the OLTP data (extract), aggregate and flatten it out (transform), and then copy the transformed data to the data warehouse (star or snowflake) tables (load).
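As a sketch of the transform-and-load step in T-SQL (the `staging.FactSales` and `dw.DimTime` names are assumptions standing in for the Part II schema; the source is the standard Northwind OLTP tables), a partial ETL filtered by a time series might look like this:

```sql
-- Partial ETL: extract one day's orders, aggregate them (transform),
-- and load the result into a staging fact table
DECLARE @StartDate date = '1997-07-01',
        @EndDate   date = '1997-07-02';   -- the time-series parameter window

INSERT INTO staging.FactSales (TimeKey, EmployeeKey, ProductKey, TotalSales)
SELECT t.TimeKey,
       o.EmployeeID,
       od.ProductID,
       SUM(od.UnitPrice * od.Quantity * (1 - od.Discount))  -- flatten the line items
FROM dbo.Orders AS o
JOIN dbo.[Order Details] AS od ON od.OrderID = o.OrderID
JOIN dw.DimTime AS t ON t.[Date] = CAST(o.OrderDate AS date)
WHERE o.OrderDate >= @StartDate
  AND o.OrderDate <  @EndDate
GROUP BY t.TimeKey, o.EmployeeID, od.ProductID;
```

A full ETL would simply drop the WHERE clause and truncate the target first; a partial ETL reruns this with a new date window each time.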

You have several options, ranging from near real-time ETL (very complex and expensive) to nightly data dumps (least complex and least expensive). You can also perform a full ETL (wipe out the data warehouse tables completely and refill them on each operation) or a partial ETL (only add incremental data as per a time series). The actual load can be done via anything from database triggers on an add/update/delete (for near real time) to simple scheduled SQL batches that wipe the data and run your load scripts. Most likely you will want to take a hybrid approach, depending on the maturity of your system. You have three basic options, ranging from least complex to most complex:

Direct database dump

ETL tool

Database triggers

When you have a long time series on which to publish your data, say nightly or weekly, you can do a direct database dump. The process is pretty straightforward: at a regular interval (or manually), a process starts that queries the OLTP database, performs all of the aggregations, etc., loads the results into a staging data warehouse database, and then wipes the production data warehouse and loads the data in.

Another option is to use an ETL tool. A good example is SQL Server Integration Services (SSIS) if you are using Microsoft SQL Server. (Actually, SSIS will work with multiple databases; you just need a SQL Server host.) A modern ETL tool will give you the ability to segment the work into logical groups, drive a control flow based on the success or failure of a condition, and roll back on failure.

A typical workflow with an ETL tool is that the ETL runs on a schedule (or based on a condition, such as a message arriving from a queue or a record being written to an admin table) and has one or more parameters passed to it. The parameter is usually a time series, and the ETL performs all of the extraction on data from the OLTP database filtered by that parameter. An ETL tool is the most likely solution you will employ, especially if you have to make frequent updates from your OLTP system to your data warehouse.

Another option is to use database triggers. For those of you that don’t know a lot about triggers, well, they can lead to evil. ;) That said, they are events that fire when data changes, and you can write SQL code, even ETL code, to run when they fire. Triggers are hard to debug and difficult to maintain, so I would only suggest using a trigger when you need “real time” updates to your data warehouse, and even then, the trigger should only write a record into an admin table that your ETL process polls to get started.
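Here is a sketch of that pattern (the `EtlChangeLog` admin table and the trigger name are hypothetical): the trigger does no ETL work itself, it just records that work is pending for the polling ETL process to pick up.

```sql
-- Hypothetical admin table that the ETL process polls
CREATE TABLE dbo.EtlChangeLog (
    ChangeId  int IDENTITY(1,1) PRIMARY KEY,
    TableName sysname  NOT NULL,
    ChangedAt datetime NOT NULL DEFAULT GETUTCDATE(),
    Processed bit      NOT NULL DEFAULT 0
);
GO

CREATE TRIGGER trg_Orders_QueueEtl
ON dbo.Orders
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Keep the OLTP transaction fast: just note that Orders changed
    INSERT INTO dbo.EtlChangeLog (TableName) VALUES (N'Orders');
END;
```

The ETL job then reads unprocessed rows from `dbo.EtlChangeLog`, runs its extraction, and flips the `Processed` flag, keeping the expensive work out of the trigger.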

A common design pattern with a sophisticated ETL is to do all of the ETL into a staging data warehouse and use a check mechanism to verify that the ETL was successful. Once this check is performed (either by computer or by human, depending on the system), you can then push the data from the staging tables to the production tables. There is a SQL operator specifically for this process: the MERGE operator. MERGE joins two tables, compares the data, and performs the inserts, updates, and deletes needed to keep the two tables in sync. Here is how we would do that with a MERGE statement and our fact table from Part II.
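Since the Part II schema is not reproduced here, the table and key names below (`staging.FactSales`, `dw.FactSales`, and the three surrogate keys) are assumptions; the shape of the statement is what matters. Once the staging data passes its checks, one MERGE brings production in line with staging:

```sql
MERGE dw.FactSales AS target
USING staging.FactSales AS source
      ON  target.TimeKey     = source.TimeKey
      AND target.EmployeeKey = source.EmployeeKey
      AND target.ProductKey  = source.ProductKey
WHEN MATCHED AND target.TotalSales <> source.TotalSales THEN
    -- Row exists in both, but the aggregate changed: update it
    UPDATE SET target.TotalSales = source.TotalSales
WHEN NOT MATCHED BY TARGET THEN
    -- New row in staging: insert it into production
    INSERT (TimeKey, EmployeeKey, ProductKey, TotalSales)
    VALUES (source.TimeKey, source.EmployeeKey, source.ProductKey, source.TotalSales)
WHEN NOT MATCHED BY SOURCE THEN
    -- Row no longer in staging: remove it from production
    DELETE;
```

The extra `AND target.TotalSales <> source.TotalSales` on the MATCHED branch skips no-op updates, which keeps the transaction log quieter on large fact tables.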