Using MSI or a strong name to store .NET apps on a network server (Part 1)

Introduction

Before .NET, my company had several network servers that were used as a place to store applications. If you wanted to use one of these apps, you had a mapped drive to the network server and launched the app from there. So the EXE was stored on the network, but the app still ran locally when launched. We used Delphi, so there was only the EXE: no DLLs to install, register, and so on. Now with .NET, Windows apps have to be installed locally for them to be fully trusted. Several publications list all of the things an application that is not installed locally is not allowed to do: no file IO, no access to the registry, no database access, and the list goes on.

Installing an application locally is a pain, especially when it comes to releasing a new version, because the new version has to go out to everyone who has already installed it. It is a lot nicer to release it to just one spot on the network. Of course, you do have to make sure no one is using the app when you release it to the network, or you have to bounce them out of the app. Still, you only have to release it to one place.

Background

So there are options in .NET for distributing applications. If you have a local intranet, you can host an ASP.NET application, but that can be somewhat limiting for extremely complex applications that are better suited to a Windows-style application, and it is not always easy to release something new on the web, even on your local intranet. You can use a web server to help distribute the application: it checks what version you have locally, and if there is a newer version, it is installed locally and then launched. You can use a terminal server: the application exists only on the terminal server, and your users have to log into the terminal server to access the app. We have tried all of these approaches, and personally I don't care much for them.

So on to the next two. First, the one I like best: create a machine-level runtime security policy, put it into an MSI script, and have your users run the MSI script (they only have to do it once), or have Active Directory distribute the MSI script at login. This policy marks a network server and a folder on that server as fully trusted. The other option is to use the SN.exe tool to create a strong name key, sign all of your applications with it, and then create a runtime security policy allowing anything with that strong name key to be fully trusted. In part 1 of this article, I will only be reviewing how to set up the machine-level runtime security policy that marks a network folder as fully trusted.

How do you do it...

First, let me show you the error you get if you try to launch a .NET app from the network when it is not fully trusted.
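If you want to see ahead of time what permission set the runtime will actually grant an assembly sitting on the share, the caspol.exe tool that ships with the .NET Framework SDK can resolve it. A minimal sketch; the server, folder, and EXE names below are placeholders:

```shell
:: Show the permission set the machine policy grants to an assembly
:: on the network share. Run from a .NET Framework SDK command prompt.
caspol -machine -resolveperm \\SERVER\Folder\MyApp.exe
```

Before the code group is added, the output shows the restricted LocalIntranet permission set; afterward it should show unrestricted (FullTrust) permissions.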

Next, I will describe how to set up the runtime security policy that marks a network server and folder as fully trusted. This is something we are currently doing, and it seems to be working well. You set up a machine-level runtime security policy in the .NET Framework Configuration tool, which can be found in Control Panel->Administrative Tools. Note that if you are not an administrator on your PC, you will not be able to set a runtime security policy. Under Runtime Security Policy\Machine\Code Groups\All_Code\LocalIntranet_Zone, you want to add a child code group.

Put in a name and comment for the new code group and click next.

Now you need to select URL from the drop-down and put the server and folder names in the URL textbox, following this format: file://SERVER/Folder/*. Then click Next.

Next you need to select the permission set you want this folder to have. Full Trust is what I usually pick, but there are several options. Then click Next and finally click Finish. At this point, you would be able to launch a .NET Windows app from the server and folder you just set up in the code group, and you would not get a security violation.
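For reference, the same code group can be created from the command line with caspol.exe instead of the configuration tool. This is a sketch under a couple of assumptions: 1.2 is the default label for LocalIntranet_Zone (run `caspol -machine -listgroups` to confirm on your machine), and the group name and server/folder are placeholders:

```shell
:: Add a child code group under LocalIntranet_Zone (label 1.2 by default)
:: granting FullTrust to anything under \\SERVER\Folder.
caspol -machine -addgroup 1.2 -url "file://SERVER/Folder/*" FullTrust -name "NetworkAppsShare"
```

The GUI and caspol edit the same machine-level policy file, so the result is identical either way.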

Creating the MSI script

On to creating the MSI script for this code group. Click the Runtime Security Policy node, and you should see an option for creating a deployment package. Click that option.

Next, select Machine in the radio group and enter a local path and file name for the MSI script.

Click Next and click Finish.
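Once the deployment package exists, users (or an Active Directory startup script) can install it silently with msiexec. The share and file name here are placeholders:

```shell
:: Silent, per-machine install of the security policy MSI
:: (requires administrator rights on the PC).
msiexec /i \\SERVER\Deploy\SecurityPolicy.msi /qn
```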

Conclusion

Now you have an MSI script that people can run (they only have to do it once), after which they will be able to launch a .NET Windows app from the network server without getting a security violation. Note that when the MSI script is run, it replaces any current machine-level runtime security policy. You can distribute the MSI script through Active Directory, but I will not cover that in this article. In part 2 of this article, I will discuss how to do something very similar to what we did here, except we will use a strong name key instead of a URL to grant full trust to an application.

About the Author

I started my programming career over 18 years ago doing COBOL and SAS on an MVS mainframe. It didn't take long for me to move into Windows programming. I started my Windows programming in Delphi (Pascal) with a Microsoft SQL Server back end. I started working with VB.NET when beta 2 came out in 2001. After spending most of my programming life as a Windows programmer, I started to check out ASP.NET in 2004. I achieved my MCSD.NET in April 2005. I have done a lot of MS SQL database work, and I have a lot of experience with Windows services and web services as well. I spent three years as a consultant programming in C#. I really enjoyed it and found the switch between VB.NET and C# to be mostly syntax. In my current position I am programming in both VB.NET and C#. Lately I have been using VS2012 and writing a Windows 8 app. You can search for the app; it is called ConvertIT.

On a personal note, I am a born-again Christian. If anyone has any questions about what it means to have a right relationship with God, or about who Jesus Christ is, send me an e-mail: ben.kubicek[at]netzero[dot]com. You need to replace the [at] with @ and the [dot] with . for the email to work. My relationship with God gives purpose and meaning to my life.

That is a good point. I suppose I should have added something like that. I guess I figured that if you understand this stuff well enough to install it, it is probably a good idea to just install it manually the first time. Otherwise, it has the potential of becoming a black box that nobody really understands. That can be a problem if things stop working or if you want it to do something different than what is in this article. Still, I guess I could have given the reader the choice. Thanks for your comments.

You know I probably should have mentioned this in my article. Anyway, I thought I would put it here. There are two reasons to want to mark a network share as fully trusted. One is as I mentioned in the article to put applications there. The other is so that your development projects can reside on a developer network share. You might ask, why would you do that? There are several reasons.

First, most people's network servers are backed up nightly. It would be surprising to find someplace where the developers' PCs are backed up nightly. Most people use some sort of source control, but who actually checks in their progress every day? This way the work is automatically backed up.

Second (this one is not always all that popular), other developers can see your code. I know it is probably in some source control program, but I ask: what is easier? Trying to take something from a source control program and get it to compile, only to find not everything was checked in? Or going to the network share where the developer has been compiling their work and just looking at the code from there?

Anyway, those are the two biggest reasons we use this technique to mark a developer network share as fully trusted, so that all .NET Windows development can be saved on the network share.

Why don't you just use a local installation folder?
Years ago my company (we used Delphi, too) had the same problem. We put the installation on a network share (UNC or mapped drive). We had a lot of problems, so we decided to transfer the programs to a local folder and access the network only when needed.
Our two main problems were:
- the network must stay online: you have to break all clients if you need to do maintenance on the server
- when you do an upgrade, you also have to break all clients. If you use a local folder, you inform the user that an upgrade is available, and then he (or his admin) decides when to upgrade

These two problems may be tolerable in a small environment (5 people), but if you have 20 or 30 customers working, it is not possible to tolerate a "global break".

At the moment we use the network as a repository: we put all upgrades in the network installation folder, and each app, at startup, checks the network install.
This seems to me the best solution.

It sounds like you have a 7-24 situation where you need your applications up all the time. Our offices are closed on Sunday, so we are able to do network and server maintenance Saturday night and all day Sunday if needed. NOTE: we do have web apps that need to be up 7-24, but we are not talking about web apps here. Our network is very stable. The only time we have had problems is when our T1 provider has had issues, or when a construction crew has cut some lines. In that case the applications wouldn't be much use anyway, since they need database servers to do any real work.

It sounds like a local installation folder is working well for you; in our case it is easier to just have one copy that we know all users are using. We have one application where we decided to do exactly the same thing you are doing. We have a network repository, and the app checks to see if there is a new version. If there is a new version, it installs it. This application happens to be the biggest pain to release or roll back. I don't think we have any intention of releasing another application that way.

You're correct... rolling back an installation is a pain, but I think that in our case the advantages outweigh the disadvantages.
You're also correct about our business: our customers work 24 hours a day, 7 days a week, so we need this kind of installation.

This is from a systems engineering standpoint. If you're going through all the trouble of building an MSI package, at the very least you'll still need to push something to the desktop (like a shortcut and a dynamically mapped drive). Why not just build the client piece (as an MSI) to deploy to the desktop instead? It's very easy through AD anyway! As far as kicking out users, you can kill the share easily enough.

I deal with quite a few apps that still demand a drive mapping and I strongly believe it's an archaic way of doing business. I work in a 7500 workstation enterprise, so you could imagine what it'd be like if all of our 200+ client/server apps needed a drive letter!

My experience tells me that, while younger programmers (you) are able to grasp newer coding methods, older veterans (your boss) tend to be quite happy with the status quo (uh, it works, don't it?). So I can certainly see your craftsmanship as a programmer, but the bigger picture is pretty much staring you in the face: go fat code/UNC (or deploy through MetaFrame) and lose the drive letter. MSI packages are a godsend to those of us who are always under pressure to do more with less, so take advantage of that! Go convince your boss that it's time to revamp the code!

First, we go through the trouble of creating an MSI package once. It gets distributed once, and then the user is set. Not all users will have database access to the apps that we release, so automatically creating a shortcut is not something we want. It sounds like your company is about seven times the size of mine, so perhaps different solutions for different sizes of companies.

Actually, this solution doesn't require a mapped drive. The user just needs access to the UNC path. Some users still choose to map a drive, since that is what they are used to doing. Still, all they have to do is create a shortcut (or have the helpdesk help them) with the UNC path to the program, or go to Start->Run and pick it from the list (if they have typed it once).

You have some valid points, although depending on your network bandwidth, the number of people you are sending the MSI script to, and the size of the MSI script, you could be looking at an all-day event to get it out. Our network is pretty spread out, with several remote facilities (in six different states).

On another subject, the release coordinators we have are not admins of the network and have no programming skills. They like to be able to control the release and rollback of applications. With your suggestion, the MSI gets rolled out to everyone's PC, and now they have the new version. Oops, something is wrong; time to roll back. Time to send out the previous MSI script to put the old app back. Doesn't sound too fun to me. That type of stuff shouldn't happen, but it does.

Finally, I may be a young programmer, but it still doesn't sound like a very good idea to just turn off the share when you want people to get out of your apps. I certainly wouldn't want that to happen to me while I was in the middle of doing my everyday job. You can bet that would cause the help desk phones to light up. You can imagine the phone calls. "What the @#@ just happened to my program?" Help desk: "Oh you know, the @# programmers just released a new version."

Of course, if the app you just released is used by any management it could be worse...

So, bottom line: I agree with your general argument that Active Directory should be used more. Still, am I ready to suggest an architecture where all .NET Windows apps are distributed through it? No, I prefer something that allows the release coordinators to have easy control over getting apps out. They know the app is in one place, and that is all they have to worry about.

Have any of your analysts done any bandwidth studies comparing data requests/transfers versus an app itself running over the network? I'd be curious to know.

And I certainly hope your release coordinators would properly notify everyone affected by a downtime before pulling the rug out from under them.

To me though, it just sounds like a fat client/disconnected dataset would be an overall easier way to manage an app. As far as installs, it's quite easy to set up distribution points. Copy the app once to your install share at each site and configure a GPO that affects the OU/Site.

You are certainly correct with the main point of all this: We all (coders, system engineers, project managers, release coordinators, and the little gray men who wear ties and sign our paychecks) strive for the best control of an app we can get. That allows us to do more with less.

That is the beautiful point. Even though the app resides on a network share, it is still running on the local PC. So other than the EXE being marked as an open file, there is no extra bandwidth.
Now as far as data access goes, all of our applications are very heavy with data access. They need the most up-to-date lookup data, and the data they are changing needs to be in the system right away so that it can be processed by automated database systems and then sent to production. We have something like 53 SQL Servers with multiple databases. I don't think a disconnected dataset would work very well. I should mention we don't always keep our database connections open, but the applications are often querying, updating, and inserting into the databases.

As with anything it seems there are many possible solutions and the trick is finding the right one for your company needs.

Configuration: a lot of apps these days need to talk to a database server. If the configuration of which server and database to use is on the client side, then you are back in the situation where an MSI script must be used to switch servers or databases. We have configuration files that reside in the same directory as the EXE on the network share. Again, one place to change to affect everyone's configuration.

Oooh, that's an easy one. Have you ever tinkered with the ADM files that support Administrative Templates in a GPO? I think you would really enjoy that. You can specify any app setting you could imagine. It needs to live under HKLM\Software\Policies, and you can start an easy-to-manage registry key using your own company name (on a default build, only Microsoft is there) and administer it using an ADM file. Of course, you'll need to specify the same keys when building an MSI package, for standardization. All you need in your code is a pointer to those keys.

Start by getting an ADM file from a 2003 SP1 or XP SP2 box and open it with Notepad (I recommend starting with wuau.adm in the Windows\INF folder because it has a nice variety of UI controls). Takes about 5 minutes to figure out the mechanics and once you're done, you can import it like any other Administrative Template (right click on Administrative Templates). Remember to test on a Local Group policy before importing into AD.

This will allow you to control any registry-based setting (like server and database pointers) through GPO effortlessly. Yup, anything to make my job easier!
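As a rough sketch of what this boils down to on the client side, the policy-managed values end up as ordinary registry data; the company name, app name, value name, and server name below are all hypothetical:

```shell
:: What the GPO/ADM template effectively writes to each client,
:: and how to inspect it afterward. All names here are placeholders.
reg add "HKLM\Software\Policies\MyCompany\MyApp" /v DatabaseServer /t REG_SZ /d SQLPROD01 /f
reg query "HKLM\Software\Policies\MyCompany\MyApp" /v DatabaseServer
```

The app then just reads that one key at startup instead of shipping its own server/database configuration.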

Thanks, Rob, those are some good tips. I will have to look further into that option. A while back we controlled a lot of things through the registry, but since then we have been moving back toward central config files.

From my own, sometimes skewed, perspective, I think it would be more valuable to see how to fully trust a public key on a strong-named assembly. That way, you can fully trust your home-grown apps (or a trusted vendor's) regardless of where they're deployed, without trusting everything in a particular location.
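That is essentially what part 2 of the article covers. As a preview, here is a sketch of the caspol.exe command for trusting by strong name; the assembly path and group name are placeholders, and -noname -noversion make the trust apply to anything signed with that key rather than one specific assembly:

```shell
:: Fully trust every assembly signed with the strong name key
:: taken from MyApp.exe, no matter where it is deployed from.
:: Group label 1 is All_Code in the default machine policy.
caspol -machine -addgroup 1 -strong -file \\SERVER\Folder\MyApp.exe -noname -noversion FullTrust -name "MyCompanyKey"
```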