Application Lifecycle

As a company, we have committed ourselves to using TFS as a core component of our development environment. At the moment, any automated testing we do is done on the build server and is substantially at the unit-test level. I see the need to expand this into system testing, and therefore into deploying onto multiple machines (client, web server, database). To this end, I have started looking at Microsoft Lab Management.

It looks like it has potential, but all the demos I see seem to show a simple build of a single module, followed by deploy and test, whereas in my company we have multiple teams deploying multiple modules with dependencies between them.

As an example, say a change to a service requires a database change as well. I can get the build of the service to kick off the Lab Management deployment, and I understand that I can add a database deployment script to the Lab Management build, but it does not really seem joined up, especially as we start to see Visual Studio SQL projects that will have their own builds. This is a simple example; in real life we may have multiple services and databases that need to be aligned for a complete system deployment.

I suspect I am asking too much of the Lab Management product. Any comments on how you use Lab Management?

I am a new member of this site and also new to programming. Recently I have been tasked to develop a new application with a group of developers.

I would like to ask if there is any way all the programmers can work on the same project hosted on a server. Currently we each develop a different section of the project on our own laptops, and then we copy and paste to compile on the server one by one.

Is there any way we can all work on the project hosted on the server at once?

Are each of you in fact working on the same 'project', which would be defined as the same code?
Or are each of you working on a different 'project', so there is no code overlap? For example, if each of you is creating a server which interacts with other servers.

Thanks Robert and jschell for the reply. I don't think we will have code overlap, because even though we are working on the same project, we are all coding different functionality.

For example, currently we are working on the USER module, and under this module we have the following tasks to code:

1) add user,
2) change password, etc.

Each task is assigned to a programmer and they write their own code. What I want to know is whether we can all open the same project but work on different code, and compile and test it.

I have just tested this by sharing the project over the network with all the programmers working on it. Coding works well, but when any one programmer wants to debug or compile, the others are affected.

Is there any software that can help all programmers work independently on the same project, without affecting others while compiling or debugging?

eddy_fj wrote:

Each task is assigned to a programmer and they write their own code. What I want to know is whether we can all open the same project but work on different code, and compile and test it.

Just to be clear that isn't really a source control question.

Presuming you are working in C#, and you really want each developer to be independent, then each developer would work in their own assembly.

Code used to interface between assemblies would also be in its own assembly. For example, but not limited to this, you might have an assembly that by design contains only interfaces.

Then you have one project which references each assembly.
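A minimal sketch of that layout, with made-up names:

    // Contracts.dll -- interfaces only; changed rarely, owned jointly.
    namespace MyApp.Contracts
    {
        public interface IUserService
        {
            void AddUser(string name);
            void ChangePassword(string name, string newPassword);
        }
    }

    // UserService.dll -- one developer's assembly; references only Contracts.dll.
    namespace MyApp.Services
    {
        using MyApp.Contracts;

        public class UserService : IUserService
        {
            public void AddUser(string name) { /* ... */ }
            public void ChangePassword(string name, string newPassword) { /* ... */ }
        }
    }

The top-level project then references Contracts.dll plus each developer's assembly and wires the implementations together.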

Natch everything is checked in.

The only gotcha to the above is when someone changes the project, for instance adding yet another assembly; then every other user must refresh, to ensure that they do not also add another, different assembly (merging such cases is possible but messy).

eddy_fj wrote:

I have just tested this by sharing the project over the network with all the programmers working on it. Coding works well, but when any one programmer wants to debug or compile, the others are affected.

That isn't going to work.

Each developer has their own copy of the code. When the code they are working on WORKS, then they check it in (it doesn't have to be complete, but it must compile and be functional to some extent).

Then other developers check out the updated code. The other developers do not need to do this immediately, but they should do it fairly often (once or twice a day) to ensure that they do not become out of sync with the others. However, that actually depends on how interdependent the modules are.

eddy_fj wrote:

without affecting others while compiling or debugging.

Just to make sure it is clear from the above:
1. There is source control. All of the developers have access to that.
2. EACH developer extracts the full set of code from the source control
3. EACH developer works on their machine and only their machine.
...a. They compile on their machine
...b. They edit on their machine
...c. They debug on their machine
...d. At some point they decide that some piece of code is ready to be added to source control.
4. Then the developer checks the code into source control.
5. Source control will allow each developer to synchronize and thus get updates WITHOUT overwriting the code they are currently working on.

As a suggestion, maybe you should buy a book specifically about one source control system (probably git) and read it.
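For example, with git the day-to-day cycle in steps 2-5 above looks roughly like this (repository URL and file name are made up):

    git clone https://server/myproject.git    # step 2: get the full set of code
    # edit, compile, debug locally (step 3)
    git add UserModule/AddUser.cs             # step 4: stage the piece that is ready
    git commit -m "Add user: first working version"
    git pull                                  # step 5: merge in everyone else's updates
    git push                                  # publish your own check-in for the team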

So you are using Visual Studio. Which language are you using? I ask because, if you are using C#, then by using the partial class concept you can each create separate files and check them in.
This way the implementation of the functionality can be distributed across files, yet it is all bound into the same class when compiled.
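For example (hypothetical class and file names), two developers each own one file of the same class:

    // UserModule.AddUser.cs -- developer A's file
    public partial class UserModule
    {
        public void AddUser(string name) { /* ... */ }
    }

    // UserModule.ChangePassword.cs -- developer B's file
    public partial class UserModule
    {
        public void ChangePassword(string name, string newPassword) { /* ... */ }
    }

The compiler merges both parts into a single UserModule class.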

Historically, the software libraries (mostly C) developed by my group were accompanied by a PDF datasheet describing the purpose, theory of operation, some block diagrams and the API. That documentation was constructed manually in an MS Word file. Now we are transitioning to Doxygen, to get rid of the manual effort of extracting the comments from the sources and putting them into the Word document. This is fairly easy to do, but the problem I have is putting the remainder of the Word content (purpose of the library, theory of operation, block diagrams etc.) into the same CHM generated by Doxygen.

Due to the huge backlog of custom-written documentation (datasheets in MS Word), I cannot afford manual operations. So I am looking for ways for the Doxygen output CHM to contain:
a) the imported MS Word theory of operation
b) the generated API documentation

Is there a way to make an MS Word document part of the Doxygen project?
Is there any way (direct or indirect) to import both the Doxygen API documentation and the MS Word content into the same CHM automatically?
Has anyone faced the same or similar problems, and what was the experience?
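For reference, the pieces Doxygen itself provides: a hand-written page can carry the theory-of-operation text, and the CHM comes from the HTML Help settings in the Doxyfile. A minimal sketch (file names and paths are illustrative, and the Word-to-text export itself is still a separate step Doxygen does not do for you):

    /** \mainpage MyLib Datasheet
     *  \section purpose Purpose of the library
     *  (text exported from the Word document goes here)
     *  \section theory Theory of operation
     *  (block diagrams can be pulled in with the \image command)
     */

and in the Doxyfile:

    INPUT             = src theory.dox
    GENERATE_HTMLHELP = YES
    CHM_FILE          = MyLib.chm
    HHC_LOCATION      = "C:/Program Files (x86)/HTML Help Workshop/hhc.exe"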

I'm looking for opinions and discussion. We're currently evaluating Continuous Integration tools, in particular Hudson and CruiseControl.NET. I have a lot of experience with CCNet, having used it for the past few years at my previous two jobs. However, Hudson seems to have really taken off and is getting great reviews.

We use TFS for our source control and build process, so we need to easily integrate with those.

What are people's thoughts? What CI tool do you use and why? All feedback welcome.

"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare

I started with Hudson, and later changed to Jenkins (which is actually a fork of Hudson; my impression is that more development happens on Jenkins nowadays than on Hudson). It is easy to configure: there is a web interface, so there is no need to work with the configuration files directly.
I don't use TFS, so my process is different: Jenkins polls the Subversion repository, checks out new sources, builds the products and tests them. I do not know how to integrate that with TFS.
By the way, can't TFS run the unit tests, copy the artifacts to some other place, etc.?

My understanding was that Jenkins and Hudson were the same product, I didn't realise one was a fork from the other.

With CruiseControl.NET you manually edit the config files, which isn't difficult as there is loads of documentation on how to do this.

At this point in time we're simply evaluating the CI tools that are currently out there. Although we use TFS for our version control, we're still looking at other options. We're re-evaluating our entire build process, and that includes TFS. Moving over to GitHub is also being considered. At this point nothing is being ruled out.

We want to evaluate and consider all the options so we choose the best process and tools that fit the business going forwards.

"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare

One of our major concerns with TFS is that it isn't easy to work with in a distributed deployment with remote workers. Only some of our development team are co-located, and we would prefer to use a version control / build process management system that supports distributed working. Hence we are looking at replacing TFS with git / GitHub.

As part of evaluating our toolchain we're also looking into which continuous integration tools fit our requirements. The main contenders are:

- CruiseControl.NET
- Jenkins
- TeamCity

Each has its merits. I have personal experience of CruiseControl.NET, whilst one of the other devs has used Jenkins. TeamCity is from JetBrains, the same company that makes ReSharper, and looks very professional with some excellent features.

Just looking for suggestions, advice etc to help make a decision.

"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." - C.A.R. Hoare

Recently I loaded up some team members with a large number of templated tasks. There is not really enough time to back-fill details into the tasks from the PBI, nor do I think anyone needs to, since each task is linked to its PBI.

However, some team members use the "My Tasks" query. Because of my loading of templated tasks, their query returns a bunch of tasks that have little detail unless they dive into the PBI.

To correct this temporarily, I have provided a query using the backlog board with tasks visible, and added a clause on the linked items (the tree view) to show only items assigned to them. This works.

However, the query also returns all of the PBIs, as I have not adjusted the top part of the query. Is there a way to hide PBIs that the user does not have a task for? What clauses would I use?

Computers have been intelligent for a long time now. It just so happens that the program writers are about as effective as a room full of monkeys trying to crank out a copy of Hamlet.

The interesting thing about software is it can not reproduce, until it can.

You can change any TFS query using the Query Editor and add further details, like the parent PBI (which should give more information), the Assigned To field, etc.

When you click on the My Tasks query, the default view is "Results", which shows the result of your query, and the window below shows details of the work items. To edit the query, navigate to the "Editor" link. Here you can change the type of query to "Tree of work items", which gives you nested results, or to "Work items and direct links", which presents all the links to your work item.
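If the editor alone doesn't get you all the way there, the underlying WIQL for a "work items and direct links" query can express "only PBIs that have a task assigned to me". A sketch, assuming the standard Scrum type names:

    SELECT [System.Id], [System.Title]
    FROM WorkItemLinks
    WHERE ([Source].[System.WorkItemType] = 'Product Backlog Item')
      AND ([System.Links.LinkType] = 'System.LinkTypes.Hierarchy-Forward')
      AND ([Target].[System.WorkItemType] = 'Task')
      AND ([Target].[System.AssignedTo] = @Me)
    MODE (MustContain)

MODE (MustContain) is what drops the PBIs that have no matching linked task.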

Do you just add a file with this code and share it amongst your projects?
Do you have a snippet type program and just copy/paste into/out of that?
Or do you build up a separate library and reference that?
If you do the latter how much do you put in, i.e. do you think there's an issue with that library containing much more code than you need?

By definition, common code cannot be a snippet: snippets are for repeated code structures, not for repeated code!
IMHO the best way is a separate library. I also make such utility methods stateless and static.
I also think there is no such thing as a too-large library; it is only a matter of code organization. There is nothing wrong with a single library with thousands of methods (if the project requires it), as long as the methods are organized...
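As a tiny illustration of the stateless/static style (names made up):

    namespace MyCompany.Utils
    {
        public static class MathUtils
        {
            // Pure function: no state involved, so any project can call it safely.
            public static int Clamp(int value, int min, int max)
            {
                return value < min ? min : (value > max ? max : value);
            }
        }
    }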

I'm not questioning your powers of observation; I'm merely remarking upon the paradox of asking a masked man who he is. (V)

You may consider another addition to that...
I create my libraries in a totally different solution (not as part of my main development solution), then create a local NuGet package from the output and add that as the reference. That handles all the necessary dependencies and updates every time I update the utility libraries...
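A minimal .nuspec for such a local package (all values illustrative; pack it with nuget pack and point Visual Studio at a local folder feed):

    <?xml version="1.0"?>
    <package>
      <metadata>
        <id>MyCompany.Utils</id>
        <version>1.0.3</version>
        <authors>Me</authors>
        <description>Shared utility methods.</description>
      </metadata>
    </package>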

Similar to Eliyahu, I prefer a utility library for such code. I write one file for each type that gets extended by an extension method (my way of organising code). Similarly, other utility methods get grouped into files by their underlying business object.
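For illustration, such a file might look like this (hypothetical names), so callers can write myString.Truncate(20):

    namespace MyCompany.Utils
    {
        public static class StringExtensions
        {
            // Extension method: appears on every string once the namespace is imported.
            public static string Truncate(this string s, int maxLength)
            {
                if (s == null || s.Length <= maxLength) return s;
                return s.Substring(0, maxLength);
            }
        }
    }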

First, be sure that the shared code really should be shared. The fact that it seems like common functionality doesn't necessarily mean that it will remain that way. (It is a really bad idea to start adding conditional logic to control different logic flows for different applications.)

Second, consider how the rest of the business applications are structured. Primarily: if you have two applications X and Y that use your common code M, do X and Y have their own delivery schedules, or are they always delivered together? If they have their own delivery schedules then the common library MUST have its own delivery schedule as well. That is the only way to ensure that X is using the version of M it was developed with, and that Y is doing the same.

Third, if different delivery schedules are needed then one must deal with differently versioned apps; and if one must deal with git as the source control system then one has a problem, since git only deals with that via different repositories.

There are additional issues depending on what language is being used and how applications are delivered.

I have a separate project in my solution that contains such code, i.e. code that is shared across other parts of the application. This can then be distributed as a self-contained assembly. If using .NET, you also have the option of placing this assembly in the GAC, where other applications can use the same functionality.
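To go the GAC route the assembly must be strongly named first; the classic two commands from a Visual Studio command prompt (file names illustrative):

    sn -k MyCompany.snk
    gacutil /i MyCompany.Shared.dll

sn -k generates a strong-name key pair (set it in the project's Signing tab before building), and gacutil /i installs the signed assembly into the GAC.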

We have probably all developed Visual Studio applications which rely on additional DLLs and static libraries containing our favorite snippets of code. My question is how best to manage this kind of project, which is not all neatly contained in the project folder. If we add our library paths (elsewhere on the disk) to the project, then we can be sure that we are linking with the latest versions of the libraries, but archiving this arrangement is a nightmare. An alternative might be to copy all the libs to a 'lib' folder within the application we are developing. This would provide a self-contained project which could easily be archived, but we may not be using the latest versions of our libs.

I would be very interested to hear what other developers do. Is there a better way?

However, you will in fact be using the same one in development, QA and production. And you are more likely to label it in source control so you can keep track of it.

Thus if a problem occurs in QA or production, you are more likely to be able to reproduce it. And when you do update to new versions, as part of a development and business decision, development and QA can more fully verify that it continues to work as expected.

The application life cycle includes the various steps involved in getting a system ready for use. For example, the waterfall model is one well-known software life cycle; it includes:
- Requirements analysis
- System design
- Implementation
- Testing
- Deployment
- Maintenance