I know this is a very basic question. If someone could humor me and tell me how they would handle this, I'd be grateful.

I decided to post this because I am about to install SyncToy to remedy the issue below, and I feel a bit unprofessional using a "Toy", but I can't think of a better way.

Many times I find when I am in this situation, I am missing some painfully obvious way to do things - this comes from being the only developer in the company.

The setup:

- ASP.NET web application developed on my computer at work
- Solution has 2 projects: Website (files) and WebsiteLib (C#/DLL)
- Using a Git repository
- Deployed on a GoGrid 2008R2 web server

Deployment:

1. Make code changes.
2. Push to Git.
3. Remote desktop to the server.
4. Pull from Git.
5. Overwrite the live files by dragging and dropping in Windows Explorer.

In Step 5 I delete all the files from the website root first... this can't be a good thing to do. That's why I am about to install SyncToy...

UPDATE: Thanks for all the useful responses. I can't pick which one to mark as the answer. Among the web-deployment approaches, it looks like I have several useful suggestions:

- Web project = whole site packaged into a single DLL. The downside for me is that I can't push simple updates; being a lone developer in a company of 50, that remains something that is simpler at times.

- Pulling straight from SCM into the web root of the site. I originally didn't do this out of fear that my SCM's hidden directory might end up being exposed, but the answers here helped me get over that (although I still don't like having one more thing I have to remember to verify stays true over time).

- Using a web farm and systematically deploying to nodes. This is the ideal solution for zero downtime, which is actually something I care about since the site is essentially a real-time revenue source for my company. I might have a hard time convincing them to double the cost of the servers, though.

--> Finally, the reinforcement of the basic principle that there needs to be a single-click deployment for the site OR ELSE SOMETHING IS WRONG is probably the most useful thing I got out of the answers.

UPDATE 2: I thought I'd come back and update this with the actual solution that's been in place for many months now and is working perfectly (for my single-web-server setup).
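The solution is a short batch file along these lines (a sketch only; "MySite" and both paths are placeholders for your own site name and directories):

```shell
@echo off
:: stop the IIS site so no files are locked during the copy
%windir%\system32\inetsrv\appcmd stop site "MySite"

:: mirror only the files that changed from the build output to the web root
robocopy "C:\build\MySite" "C:\inetpub\wwwroot\MySite" /MIR

:: bring the site back up
%windir%\system32\inetsrv\appcmd start site "MySite"
```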

As you can see, this brings the site down, uses Robocopy to intelligently copy only the files that have changed, then brings the site back up. It typically runs in less than 2 seconds. Since peak traffic on this site is about 2 requests per second, missing roughly 4 requests per update is acceptable.

Since I've gotten more proficient with Git, I've found that leaving the first four steps above as a manual process is also acceptable, although I'm sure I could roll the whole thing into a single click if I wanted to.

Both AppCmd.exe and Robocopy are documented on Microsoft's site.

You should consider SSHing into your server instead of using Remote Desktop; that way you can script a deploy if you want.
– Malfist, Aug 2 '11 at 19:10

I think this question is fine to stay here. Deploying a website seems to me like a developer concern more than a sysadmin job, though I guess that could vary by company. Still, all answers so far are developer-focused.
– Anna Lear♦, Aug 2 '11 at 19:48

@Malfist -- how exactly does one SSH into a Windows server?
– Wyatt Barnett, Aug 2 '11 at 21:30

@Wyatt By running an SSH server on the remote machine, (almost the) same as you would on a Linux box.
– Anna Lear♦, Aug 3 '11 at 2:28

When you use Web Deploy, you lose the option to hotfix, no? I know there are plenty of people who don't want to do anything that far outside the formal process, but I like having the option for emergencies...
– Aaron Anodide, Aug 5 '11 at 16:43

You do, good point. I know at my company we don't want people to do that, because we can't track changes. For personal sites I really really like having that option though.
– Nate, Aug 5 '11 at 16:48

Typically, what I do is keep everything in an SVN repository. When I'm finished with some changes, I commit on the dev site and then check out on production. That keeps everything synced, and it's fast and easy. If checking out is too much of a hassle, you can set up Apache with WebDAV and it will do it for you.
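The commit-then-checkout cycle described above is just two commands, assuming the production web root is already a Subversion working copy (the commit message here is only an example):

```shell
# on the dev machine, once the change has been tested
svn commit -m "Fix pricing calculation"

# on the production server, inside the web-root working copy
svn update
```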

For each of my web apps I have a Git repository set up with three branches: Live, Beta, and Features. Live is, of course, the live site. Beta is the site used to fix bugs or for final testing of features right before implementation. Then, as you said, I do a simple git push and a git pull on Live to bring the changes in. Features is used for "next version" enhancements.
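The push/pull step looks roughly like this, assuming the server's web root is itself a clone of the repository and the branch is named live (remote and branch names are placeholders):

```shell
# on the dev machine, once the change has been merged to the live branch
git push origin live

# on the server, inside the web-root clone
git pull origin live
```

One thing to verify with this layout is that the repository's hidden .git directory is not served to visitors (e.g. blocked by IIS request filtering).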

At my previous employer, to deploy code changes, we'd set the load balancer to stop sending traffic to one webserver. It might take 20 minutes for sessions on that first webserver to expire. We'd update the code on it by unzipping the deployment zip file, then check that things ran OK by hitting that webserver's direct IP address. When we were convinced it worked, we'd set the load balancer to use the now-updated webserver, wait for the sessions to expire on another server, and then update that one (and so on until all of them were updated). After they all checked out OK, we'd set the load balancer back to doing its normal job. This got complicated when we had as many as 10 webservers connected to the load balancer during peak seasonal loads: updating them one by one could take hours, because we could not shut down the live website - customers had to be able to get to the site.

In ASP.NET, if you drop a file named App_Offline.htm into the root directory of a website, the application unloads, which lets you update the DLLs (and whatever else). IIS will serve up a page titled "Application Offline". When the file is renamed or removed, the web application restarts and IIS serves up pages for that website again. This is what Visual Studio does when you publish a website from inside VS.
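A minimal sketch of that technique as a batch file (all paths and the site name are placeholders):

```shell
:: drop App_Offline.htm into the web root; ASP.NET unloads the app
:: and serves this page for every request while it exists
copy App_Offline.htm C:\inetpub\wwwroot\MySite\

:: update files and DLLs while the app domain is unloaded,
:: excluding the offline page itself from the mirror
robocopy C:\deploy\MySite C:\inetpub\wwwroot\MySite /MIR /XF App_Offline.htm

:: remove the file; the next request restarts the application
del C:\inetpub\wwwroot\MySite\App_Offline.htm
```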

My previous employer used a custom job-control system to handle deploying software, restarting services, and so on. Even if it were available to you, it would be way overkill for your needs.

The employer before that had custom scripts to copy data from Subversion out to the servers and do a rolling restart.

Several places I saw before that used makefiles to manage deployment. They generally used a strategy like: cut half the webservers from the load balancer, wait, stop that half, roll the code out, restart them, then flip the load balancer, wait, stop the other half, restart them, and bring the load balancer back to normal.

At all places that I have worked, rolling out code was either a single command, or the lack of it being a single command was recognized as a problem to be fixed.

Typically, what we use to solve this problem with our websites is a Sysinternals tool called junction.

Using this tool we are able to create links from one directory to another. The app root on the server contains 3 folders: Red, Blue, and Current. IIS is configured to always look at Current for its files.

You can issue the command junction current to see which folder Current is pointing at. Say, for instance, that it is currently pointing at Blue. What we do is stage the files for the new deployment in Red and make sure all configuration is ready to go.

Once we are ready, we issue the command junction current red to repoint it.

There are two things that make this solution so great:

1) You have all the time in the world to stage your changes in the folder. There is no rush, and the only downtime is while the app pool spins up. (There is a way to precompile that step as well.)

2) If something goes wrong with your deployment, all you have to do to roll back is issue a command instead of trying to revert changes. In our case that command would be junction current blue.
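Put together, a swap looks something like the following. Note that junction.exe has no in-place "repoint": the existing junction is deleted and recreated at the new target (the app-root path here is a placeholder):

```shell
:: query where Current points right now (say, Blue)
junction C:\inetpub\MyApp\Current

:: after staging the new build in Red, swap the link
junction -d C:\inetpub\MyApp\Current
junction C:\inetpub\MyApp\Current C:\inetpub\MyApp\Red

:: rollback is the same two commands pointing back at Blue
```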

Hopefully our way of doing things can shed some light on a new solution for you.

I recommend an automated build and deploy script. The reason is that you can type one command to roll out a website and one command to roll it back. And if you are thorough enough, this will include your database schema migrations (Migrator.Net).

One of the main reasons not to use SVN or Git to deploy directly is that the environment differs between staging and production. In your NAnt script you can have it build your .config files specifically for the environment you are targeting. This saves you from forgetting to enter a config setting in production.

It also automates the entire process, so any number of manual steps becomes one simple command.
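Under that setup, the whole deployment collapses to a single command in each direction. The build file, target names, and environment property below are hypothetical, but -buildfile: and -D:name=value are standard NAnt command-line options:

```shell
:: one command to roll out, with environment-specific .config generation
nant -buildfile:deploy.build -D:environment=production deploy

:: and one command to roll back
nant -buildfile:deploy.build -D:environment=production rollback
```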

What I have done, and I am not sure if you have the scope for this, but here goes: a developer would check code into a QA branch, and a systems engineer would load it into a QA environment; once it passed QA, it would be promoted to the production branch. There were at least two of every site connected to a server on a load balancer (odds and evens in this case). One of the two servers would be taken offline, IIS would be stopped, the site would be archived, and the new site would be deployed along with any IIS changes required; IIS would then be restarted, and you would move on to the next server. This was all scripted using C# in our case, but it was done in the past using VBScript. I hope that helps. Cheers.