Since I’m continually running the latest betas and versions of software and add-ins, every once in a while my system gets into a wonky state. I was having the issue where PowerPivot and PowerView weren’t showing up in my Excel 2013 ribbon, and I noticed a couple of others were having the same issue. If you’ve run into this problem, take a look at the Excel Support Team blog post – Excel 2013 PowerPivot or PowerView add-ins are not available – then follow the directions for updating your registry and you’ll be back in PowerBI heaven using PowerPivot and PowerView.

The Data Mining Add-ins allow you to harness the power of SQL Server 2012 predictive analytics in Excel and Visio and they have been updated to include 64-bit support for Office 2010, and now Office 2013 as well. Use Table Analysis Tools to get insight with a couple of clicks. Or dive into the Data Mining Client for full-lifecycle data mining, and then visualize your models in Visio.

It's great to see a high-performance .NET library for writing MPI apps in C#. It would be good to hear about implementations using MPI.NET and how well it works, and it would also be neat to see someone test out MPI.NET with F#. I can't wait to see the paper by Douglas Gregor and Andrew Lumsdaine, "Design and Implementation of a High-Performance MPI for C# and the Common Language Infrastructure," which will appear at PPoPP 2008.

MPI.NET is a high-performance, easy-to-use implementation of the Message Passing Interface (MPI) for Microsoft's .NET environment. MPI is the de facto standard for writing parallel programs running on a distributed memory system, such as a compute cluster, and is widely implemented. Most MPI implementations provide support for writing MPI programs in C, C++, and Fortran. MPI.NET provides support for all of the .NET languages (especially C#), and includes significant extensions (such as automatic serialization of objects) that make it far easier to build parallel programs that run on clusters.
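MPI.NET itself is a C# library, but the message-passing model it implements is easy to illustrate. Below is a purely illustrative, in-process Python sketch (all names here are invented for the sketch, not MPI.NET's API): each "rank" owns a private inbox and communicates only by sending and receiving whole objects, which is roughly the role MPI.NET's automatic object serialization plays for .NET types.

```python
import queue
import threading

class Comm:
    """Toy stand-in for an MPI communicator: one inbox per rank."""
    def __init__(self, size):
        self.size = size
        self._inbox = [queue.Queue() for _ in range(size)]

    def send(self, obj, dest):
        # Whole Python objects are passed; MPI.NET serializes .NET objects similarly.
        self._inbox[dest].put(obj)

    def recv(self, rank):
        # Blocks until a message arrives in this rank's inbox.
        return self._inbox[rank].get()

def worker(comm, rank, out):
    # Each rank passes a greeting to the next rank around a ring.
    nxt = (rank + 1) % comm.size
    comm.send(f"hello from rank {rank}", nxt)
    out[rank] = comm.recv(rank)

comm = Comm(4)
out = [None] * 4
threads = [threading.Thread(target=worker, args=(comm, r, out)) for r in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# out[r] now holds the message sent by rank (r - 1) % 4
```

In a real MPI program each rank would be a separate process on a cluster node; the threads here just stand in for that so the pattern is visible in a few lines.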

As part of the WorldWide Telescope Apogee release yesterday we also released TeraPixel – the largest, smoothest image of the sky – a spherical image that we believe is the world's largest image available for anyone to view.

The first thing to understand is how large a terapixel image is – it’s a million by a million pixels in size. To view an image of that size at its highest resolution you would need 500,000 HDTVs. So the challenge was not just stitching the images together, but actually making the result seamless and smooth – and that was the challenge taken on by a small team about 6 months ago.
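The "500,000 HDTVs" figure is easy to sanity-check with a couple of lines of arithmetic, assuming a standard 1080p display:

```python
# How many 1080p displays would a terapixel image fill?
pixels = 1_000_000 ** 2   # one million by one million = 10**12 pixels
per_hdtv = 1920 * 1080    # pixels on a single 1080p HDTV
displays = pixels / per_hdtv
print(round(displays))    # roughly 482,000 – i.e. about half a million HDTVs
```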

The TeraPixel project began with data from the Digitized Sky Survey, which is a collection of 1791 pairs of red-light and blue-light images taken over a period of 50 years by two ground-based survey telescopes – the Palomar telescope in California, United States, and the UK Schmidt telescope in New South Wales, Australia. The Palomar telescope took photographs of the Northern sky, and the Southern sky down to around 30 degrees south. The UK Schmidt telescope took photographs of the rest of the Southern sky.

To create the TeraPixel image, there were three major computational and data-intensive steps:

Create color plates from DSS data

Stitch and smooth images

Create sky image pyramid for WWT

Vignetting correction was needed because the corners of each plate end up being less exposed than the center – so a flat field was created to normalize the intensity of the images.
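The idea behind flat-field correction is simple: divide each pixel by the (normalized) flat field so that uniformly lit regions come out uniform. A minimal sketch, with a made-up function name and plain nested lists standing in for image plates:

```python
def flat_field_correct(raw, flat):
    """Correct vignetting by dividing out the flat field.

    raw  -- 2D list of observed pixel intensities
    flat -- 2D list recording how sensitive each pixel position is
            (e.g. darker corners have lower flat values)
    """
    # Normalize the flat field to unit mean so overall brightness is preserved.
    mean = sum(map(sum, flat)) / (len(flat) * len(flat[0]))
    return [[raw[i][j] / (flat[i][j] / mean) for j in range(len(raw[0]))]
            for i in range(len(raw))]

# A uniform sky seen through an uneven flat field comes back uniform:
corrected = flat_field_correct([[50, 100], [100, 50]],
                               [[0.5, 1.0], [1.0, 0.5]])
```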

Astrometric alignment created a new blue plate so that the red and blue plates were the same size and a given pixel location in each plate referred to the same position in the sky.

The green channel was created from the red and blue plates, and then the saturation and noise were corrected in the newly created plates.
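One simple way to synthesize a missing green channel from red and blue plates is to interpolate between them; the sketch below uses a plain average, which is an assumption for illustration, not necessarily the weighting the TeraPixel team used:

```python
def synthesize_green(red, blue):
    """Illustrative green-channel synthesis: average the red and blue plates
    pixel by pixel. (The actual weighting used by the project may differ.)"""
    return [[(r + b) / 2 for r, b in zip(red_row, blue_row)]
            for red_row, blue_row in zip(red, blue)]

green = synthesize_green([[100, 200]], [[50, 100]])
```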

To stitch the images together, they first needed to be projected from the sphere onto a plane – a square of one million by one million pixels.
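The essence of that step is a mapping from sky coordinates to positions in one giant square. WorldWide Telescope actually uses the TOAST projection for all-sky imagery; the sketch below instead uses a much simpler equirectangular mapping, purely to illustrate what "sphere onto a million-by-million-pixel plane" means (function name and mapping are illustrative only):

```python
SIZE = 1_000_000  # the TeraPixel plane is one million pixels on a side

def sky_to_pixel(ra_deg, dec_deg, size=SIZE):
    """Map right ascension [0, 360) and declination [-90, 90] degrees
    to (x, y) pixel coordinates in a size-by-size square.
    This is a simple equirectangular mapping, NOT the TOAST projection
    WorldWide Telescope actually uses."""
    x = int(ra_deg / 360.0 * (size - 1))
    y = int((90.0 - dec_deg) / 180.0 * (size - 1))
    return x, y

# The celestial "origin" lands in the top-left corner; mid-sky lands mid-plane.
print(sky_to_pixel(0, 90))    # (0, 0)
print(sky_to_pixel(180, 0))   # near the center of the square
```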

Due to differences in exposure and the simple juxtaposition of the color plates, the resulting stitch had undesirable seams. To clean up the image, the smoothing was accomplished with distributed gradient-domain processing code from a collaboration between MSR (Hugues Hoppe) and JHU (Michael Kazhdan) – the paper appeared in ACM Transactions on Graphics (March 2010).
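The core idea of gradient-domain processing is to keep the image's gradients (the local detail) while letting absolute intensities shift so seams disappear. A tiny one-dimensional sketch of the idea, with invented names (the real system solves a large 2D Poisson problem in a distributed fashion):

```python
def gradient_blend_1d(grad, left, right):
    """Reconstruct a 1D signal whose differences match `grad` as closely as
    possible while hitting the boundary values `left` and `right`.

    Integrating the gradients directly may miss the right boundary (a 'seam');
    spreading the mismatch linearly across the signal removes the seam while
    keeping the gradients uniform.
    """
    f = [float(left)]
    for g in grad:
        f.append(f[-1] + g)
    err = right - f[-1]          # how far the naive integration misses
    n = len(f) - 1
    return [f[i] + err * i / n for i in range(len(f))]

# Gradients of 1 per step, but the boundaries demand a total rise of 6:
smooth = gradient_blend_1d([1, 1, 1], left=0, right=6)
# The mismatch is distributed evenly, so the result ramps smoothly: 0, 2, 4, 6
```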

Finally, the resulting sky image was turned into a tiled multi-resolution pyramid. Below you can see the differences between the old image and the newly released one.
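A multi-resolution pyramid is built by repeatedly downsampling: each level averages 2×2 blocks of the level below, so a viewer only ever fetches tiles at the resolution it needs. A minimal sketch with nested lists standing in for square images (names are illustrative, not WWT's tiling code):

```python
def downsample(img):
    """Halve a square image by averaging each 2x2 block of pixels."""
    n = len(img)
    return [[(img[2*i][2*j] + img[2*i][2*j+1] +
              img[2*i+1][2*j] + img[2*i+1][2*j+1]) / 4.0
             for j in range(n // 2)]
            for i in range(n // 2)]

def build_pyramid(img):
    """Stack of levels from full resolution down to a single pixel."""
    levels = [img]
    while len(levels[-1]) > 1:
        levels.append(downsample(levels[-1]))
    return levels

levels = build_pyramid([[1, 1], [3, 3]])
# Two levels: the original 2x2 image, then its 1x1 average.
```

For the real TeraPixel image the same recursion runs from the million-by-million base down through many tile levels, which is what lets WWT zoom smoothly.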

All of this large-scale data aggregation was accomplished with an integrated set of Microsoft technologies.

TerraServer was arguably the first large database exposed via web services on the Internet for anyone to program against. I’m very interested in seeing how it’s been used in applications and web sites. I ran across another one where Mathematica uses it as one of their examples – Example: TerraServer Explorer.

If you know of other apps using the TerraServer Web Services – let me know…

I saw this post about the Wikipedia Explorer using WPF on Steve Clayton's blog. It's really easy to use – I like the network mode, seeing all the relevant links, and I also like how it adds columns to the page as you maximize the window... I see this as a model for how scientific data and papers should be viewed.

"Using the latest WPF technologies, Dot Net Solutions has crafted an application to browse Wikipedia which we have dubbed Wikipedia Explorer. Compared to the standard text only view of articles, Wikipedia Explorer deals with and displays the relationships between the articles. With the display of the data, the application allows 3 forms of view. An initial Document layout displays the article's content as it would be displayed in Wikipedia itself. The real value of the application however, is in the extra 3DExplorer and Network view modes.

Within the 3DExplorer mode, the main article is displayed in the centre of the screen with all linked articles shown around in a helix structure for quick navigation. Scrolling through the articles is as easy as scrolling with your mouse wheel."

What you get is a VERY powerful visualization of the flat but hypertexted Wikipedia. The 3D explorer is funky, but the Network mode is just awesome. Put in a search term, switch to Network mode, and watch the app build out the web of links before your very eyes. I think Tim has done a stunning job here, but you can check for yourself as Wikipedia Explorer can be run as a ClickOnce application (note that you do need to install the .NET 3.0 redistributable package).

There is a new release of the Python Tools for Visual Studio, and it includes Pyvot: a connector to Excel that allows data transfer and manipulation – check out the tutorial. It also includes PyKinect, for leveraging Kinect to build new natural user interfaces (NUIs)…

M. J. Smith, M. C. Vanderwel, V. Lyutsarev, S. Emmott, and D. W. Purves – Computational Science Laboratory, Microsoft Research Cambridge, 7 J. J. Thomson Avenue, Cambridge, CB3 0FB, UK

Abstract. The feedback between climate and the terrestrial carbon cycle will be a key determinant of the dynamics of the Earth System over the coming decades and centuries. However, Earth System Model projections of the terrestrial carbon balance vary widely over these timescales. This is largely due to differences in their carbon cycle models. A major goal in biogeosciences is therefore to improve understanding of the terrestrial carbon cycle to enable better constrained projections. Essential to achieving this goal will be assessing the empirical support for alternative models of component processes, identifying key uncertainties and inconsistencies, and ultimately identifying the models that are most consistent with empirical evidence. To begin meeting these requirements we data-constrained all parameters of all component processes within a global terrestrial carbon model. Our goals were to assess the climate dependencies obtained for different component processes when all parameters have been inferred from empirical data, assess whether these were consistent with current knowledge and understanding, assess the importance of different data sets and the model structure for inferring those dependencies, assess the predictive accuracy of the model, and identify a methodology by which alternative component models could be compared within the same framework in future. Although formulated as differential equations describing carbon fluxes through plant and soil pools, the model was fitted assuming the carbon pools were in states of dynamic equilibrium (input rates equal output rates). Thus, the parameterised model is of the equilibrium terrestrial carbon cycle.

All but 2 of the 12 component processes in the model were inferred to have strong climate dependencies, although it was not possible to data-constrain all parameters, indicating some potentially redundant details. Similar climate dependencies were obtained for most processes whether inferred individually from their corresponding data sets or using the full terrestrial carbon model and all available data sets, indicating a strong overall consistency in the information provided by different data sets under the assumed model formulation. A notable exception was plant mortality, in which qualitatively different climate dependencies were inferred depending on the model formulation and data sets used, highlighting this component as the major structural uncertainty in the model. All but two component processes predicted empirical data better than a null model in which no climate dependency was assumed. Equilibrium plant carbon was predicted especially well (explaining around 70% of the variation in the withheld evaluation data). We discuss the advantages of our approach in relation to advancing our understanding of the carbon cycle and enabling Earth System Models to make better constrained projections.
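The "dynamic equilibrium (input rates equal output rates)" assumption in the abstract can be made concrete with a minimal one-pool sketch. The symbols below are illustrative, not the paper's notation: a carbon pool $C$ with input flux $I$ and first-order output rate $k$ obeys

```latex
\frac{dC}{dt} = I - kC
```

and setting $dC/dt = 0$ gives the equilibrium pool size

```latex
C^{*} = \frac{I}{k}
```

Fitting at equilibrium therefore means inferring climate-dependent inputs and rates from observed (approximately steady) pool sizes, rather than simulating the transient dynamics of the pools.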

The agenda for MSR eScience Workshop is now available...along with all the fabulous talks and speakers, we are excited to have Tony Hey (VP of TCI) speaking and Jim Gray (MSR BARC) participating. For all you scientists and researchers interested in attending - don't forget to register.

I meant to do an entry on this paper by Stuart Ozer (MSR) and David Kim & David Baker (Rosetta@Home) months ago... it's a great way to integrate SQL Reporting Services with something like Rosetta@Home and provide a really great service not only for the community users, but also for the researchers using the system. Below is the architecture diagram...

A new generation of computationally intensive scientific research projects relies on volunteers from around the world contributing idle computer time to calculate mathematical models. Many of these projects utilize a common architecture to manage the scheduling and distribution of calculations and collection of results from participants. User engagement is critical to the success of these projects, and feedback to participants illustrating their role in the project’s progress is known to increase interest and strengthen the community. This article describes how one project -- University of Washington’s Rosetta@Home, which predicts and designs the folded conformations of proteins and protein complexes -- created a web-based, on-demand reporting system that graphically illustrates a user or team’s contributions to the project. The reporting service is also useful to the project scientists in assessing the utility of alternative models and computational techniques. The system relies on a comprehensive database platform that includes tools for data integration, data management, querying and web-based reporting. The reporting components integrate seamlessly with the rest of the project’s data and web infrastructure, and the report pages have proven to be popular among both participants and lab members.

Very cool – a lightweight way to share applications... brings me back to the NetMeeting days...

There is even integration with Word – could this be the way for academic papers to be written, so that they aren't being emailed back and forth all the time?

If a Microsoft Office Word document is being edited during a SharedView session, the Track Changes feature in Word is automatically enabled, and each change is highlighted with a text identifier indicating which user made the change.

Hold more effective meetings and conference calls

Connect with up to 15 people in different locations and get your point across by showing them what's on your screen.

Work together in real time

Share, review, and update documents with multiple people in real time.

Overview @Sat by Survivorsoft is a nice piece of software that quickly lets you check up on your favorite space missions. With it, you can check the position of various manned and unmanned missions, and it will also give you the latest news and pictures from those missions. The Survivorsoft Web site states that you will need Windows Mobile 2003 to currently run the app, since it's based on the .NET Compact Framework (SP1). I have tried the program on my iPaq 3765 with Pocket PC 2002; there are some problems and it is quite slow. I'd recommend this only to Windows Mobile 2003 users at this time.

Microsoft C#UNG (pronounced “chung” and short for C# Universal Network/Graph System) is a desktop application that displays graphs, which are collections of vertices connected by edges. C#UNG can read graphs in several file formats, lay them out using one of several layout algorithms, and display them with a variety of display options. An Excel add-in enables graph data entered in an Excel worksheet to be displayed easily in C#UNG. The components used to develop the application are available as an API for developers who want to create and display graphs in their own applications.
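C#UNG's core objects are just vertices, edges, and layout algorithms. As a language-neutral illustration of what one of the simpler layouts does (a circle layout is a common option in graph viewers of this kind), here is a hedged Python sketch with invented names, not C#UNG's actual API:

```python
import math

def circle_layout(vertices, radius=1.0):
    """Place vertices evenly spaced on a circle – one of the simplest
    layout algorithms a graph viewer can offer. Returns {vertex: (x, y)}."""
    n = len(vertices)
    positions = {}
    for k, v in enumerate(vertices):
        theta = 2 * math.pi * k / n      # angle of the k-th vertex
        positions[v] = (radius * math.cos(theta), radius * math.sin(theta))
    return positions

# Four vertices land at the four compass points of the unit circle:
pos = circle_layout(["a", "b", "c", "d"])
```

Force-directed layouts (also typical in such tools) refine positions iteratively instead, but the input and output shapes are the same: a vertex set in, coordinates out, which is exactly the contract an Excel worksheet of graph data can feed.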

Today we’ve made online versions of the Windows Azure training for researchers available to complement the availability of all the course materials. Also, if you have the ability to attend the training in person, there are many sessions coming up.

One of the most hidden features in Word 2007 is the equation editor – it allows you to input equations using the linear format, and the equations that are generated are truly visually appealing.

There are some videos showing the use of the equation editor, but I just saw that Murray Sargent is the “star” of a new video walking through some complex equations and showing some of the other formatting/alignment features that are included.

I understand the need to think about legacy applications and that companies must keep their customers happy. So if customers want to use Web Services technologies in order to build their object-oriented systems, why not? Let’s encourage them (e.g., CORBA binding to WSDL). I don’t want to restart the old argument on why WSDL is not yet another object IDL (post 1, post 2, post 3) but it seems that we are treating it as such. I think we are forgetting the importance of SOAP, the importance of the message. The Indigo people get it. So... I would like to start a campaign for the promotion of SOAP, the “love the message campaign” or “love SOAP campaign”. Here’s a ribbon to go with it. What do you think? Can we make this happen? Can we make people believers? Spread the word!!! :-)