Frequently Asked Questions

Generally, using "setview -exec" is a deprecated way of doing things. It will not work on NT, and it will not work in snapshot views. Instead, one should use view extended path names.

When writing scripts, it is a good idea to normalize path names to the view root, which can be obtained by running "cleartool pwv -root". Note that this returns an empty string when run in a setview context. Therefore, you can safely append an absolute pathname beginning with the vob tag to the value returned by "cleartool pwv -root".
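A minimal sketch of why this works in both contexts (the view tag and vob path below are made up):

```shell
# "cleartool pwv -root" prints "/view/<viewtag>" in a view-extended
# context and an empty string inside a setview. Either way, appending
# an absolute path that begins with the vob tag yields a valid name.
# We simulate the two possible outputs here:
for root in "" "/view/myview"; do
    path="${root}/vobs/src/hello.c"   # vob tag + element path (hypothetical)
    echo "$path"
done
```

In a real script you would of course set `root=$(cleartool pwv -root)` instead of looping over the two cases.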

Make sure no process has its working directory set to a path beginning with the vob tag. You can use fuser(1) to determine which processes are using that file system.

More specifically, go to /view (the view root on a ClearCase UNIX server) and type 'fuser -cu'. If there are entries, try 'fuser -ck' (as root).

It has been observed that this isn't enough in some cases; in extreme cases, only a reboot will clear out the locks. A classic one is when /data (where you find 'clearcase/vobstore') is a link to a remote file system. If that file system is removed (typically during a DRP - Disaster Recovery Plan - exercise) while the ClearCase server is still up, you can only reboot: the mvfs module will deny any attempt to unmount itself with "device busy".

First, start by monitoring your license usage over the day. There are some nice packages that allow you to do this, for example Ed Finch's ClGraph.

The next thing is to reduce the license timeout to the minimum allowed: 30 minutes. Do this by adding a line saying -timeout 30 to your /var/adm/atria/license.db file. You may also want to give a list of priority users by adding lines saying -user userid.
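For instance, a license.db fragment combining both settings (the user ids are placeholders; your license lines stay as they are):

```
-timeout 30
-user alice
-user bob
```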

If you still run out of licenses (and you can't afford to buy more), you can ask people to run clearlicense -release, but keep in mind this can only be done a limited number of times per day (approximately twice the number of licenses).

Touching the /var/adm/atria/license.db file releases all licenses at once, but again this can only be done 12 times per day. I use a crontab script similar to the one below, touching the file at a frequency that roughly matches my usage pattern:

5 9,10,11,13,14,15,16,17,18 * * * /bin/touch /var/adm/atria/license.db

Yet another way to reduce license usage is to encourage developers to use snapshot views and ensure that your change process knows how to deal with snapshot views, as there are a couple of pitfalls. Developers usually need little encouragement, as they will gladly sacrifice dynamic views for faster build performance, especially on NT systems.

The checkin is mainly for cosmetic reasons and to avoid confusion if this procedure is used in a snapshot view. In the end, the element isn't removed but just relocated into the trash bin. The ClearCase administrator will empty out the trash once in a while.

This procedure will not work as described if you don't notice the error immediately and check in the containing directory. If you later check out the directory and remove the element, you will notice that it doesn't get relocated into lost+found, since the previous version of the directory still has a reference to that element. If you then attempt to create a directory element with the same name, the evil twin trigger will get in your way. You can fool the trigger, though, by first creating a directory element with a different name and then renaming it using the ct mv command.

The important thing to remember is that renames, additions and removals of elements from a directory are all harmless and recoverable, so don't panic!

It helps to visualize additions, removals and renames as editing the containing directory. It's very much like adding, removing or changing lines in a file, and the recovery procedures are similar.

The easiest thing to do is to simply cancel the checkout of the directory. This will restore the previous version of the directory, and the removed elements will reappear, as if by magic:

% ct co -nc .
Checked out "." from version "/main/234".
% ct rm file          <=== OOPS!
Removed "file".
% ct unco .
Checkout cancelled for ".".
% ls file
file
%

If you don't notice the error immediately and have already checked in your directory, you can take advantage of the evil twin trigger and allow it to help you. Simply attempt to re-mkelem the deleted element and follow the instructions:

% ct co -nc .
Checked out "." from version "/main/234".
% ct mkelem file
ERROR: An element named "file" already exists
in some other version of ".":
Instead of creating a new element, you probably want to
create a hard link to the existing element, like so:
cleartool ln .@@/main/LATEST/file .
% ct ln .@@/main/LATEST/file .
Link created: "./file".
%

In general, directories don't contain elements but contain named links to elements. Removing an element doesn't destroy the element, it just removes the link. It is therefore almost trivial to resurrect a removed element.

One consequence is that removing directories and removing files are really the same operation, and the recovery procedure is identical. In particular, if you accidentally removed a whole tree, you only need to recreate the link to the topmost directory of that tree.

A more precise formulation would be: "How do I locate elements that are linked into two or more different directory elements?". There is no really good method to do this except by exhaustive search.
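One hedged sketch of such a search: first collect "element-oid pathname" pairs for every element, e.g. with something along the lines of `cleartool find -all -exec 'cleartool desc -fmt "%On %n\n" "$CLEARCASE_PN@@"'` (that command is an assumption, not tested here), then report every oid that appears under more than one name. The post-processing step looks like this:

```shell
# Simulated input: the output of the cleartool sweep, one
# "element-oid pathname" pair per line (oids are made up).
cat > oids.txt <<'EOF'
aaaa.1111 /vobs/src/a.c
bbbb.2222 /vobs/src/b.c
bbbb.2222 /vobs/lib/b.c
EOF
# Group paths by oid and print the oids linked into several directories.
awk '{ paths[$1] = paths[$1] " " $2; n[$1]++ }
     END { for (o in n) if (n[o] > 1) print o ":" paths[o] }' oids.txt
```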

This is probably the single most often asked question. The simple answer is that ClearCase doesn't support RCS keyword expansion, mainly for one reason: it would break merges.

Since two different versions would always have a different RCS keyword expansion at the same location within the file, any merge between those two versions would invariably cause a conflict that couldn't be resolved automatically.

In general, "inband info" (i.e. storing information about an object within an object) is a bad idea, since care must be taken not to confuse any random string with the actual metadata. A vivid demonstration of the danger can be found when attempting to check into RCS files that explain how RCS keywords work.

Instead of storing metadata within the data, one should take advantage of ClearCase's extensive metadata types (attributes, hyperlinks, comments etc...).

The obvious follow-up question then is: "How do I access the metadata when I can't connect to ClearCase?" There are really two cases here:

- Version info on the production system;

- Version info in an offline development source tree, for example a disconnected snapshot view.

The correct solution to the first case is to insert the RCS keywords at build or packaging time. There are tools to insert keywords into binaries, for example the contributed package T0039 (ccwhat).
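A hedged sketch of the build-time approach (the label name and file names are made up): instead of expanding keywords inside version-controlled sources, generate a what(1)-style identification string into a file that gets compiled into the binary.

```shell
# The label would normally come from cleartool at build time; here it
# is a hypothetical placeholder.
label="REL_1.2.3"
# Generate a what-string readable later with what(1) or strings(1).
printf 'static const char whatstring[] = "@(#) myprog %s";\n' "$label" > version.c
cat version.c
```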

The second case is the best argument for implementing RCS keywords, and if it's truly important to your environment, read on...

There are ways to make RCS keywords work within ClearCase, but they are by no means trivial. The best method so far is to use the contributed package T0027, which contains both a trigger and a type manager. The trigger does the actual work of substituting RCS keywords, and the type manager avoids the merge conflicts by interposing itself between the file and the real type manager, removing all RCS keywords.

Note that even with this solution, your branching strategy should be set up to deal with the case described in the diagram on the right. Assume that the blue developer delivers a change (1) of some file into the red delivery branch and then never touches that file again. The green developer then makes another change and delivers it (2). The blue developer, having made changes in some other files, wants to sync up and executes a findmerge from the red branch, which will cause green's change to be merged over as a copy merge (3). Now the blue developer wants to deliver his other changes, but since the merge in (3) caused the RCS keywords to change, findmerge will think that blue modified the file even though he didn't, causing (4) to happen, which is somewhat bewildering.

This happens because the findmerge algorithm has a case where the actual file content is compared. Unfortunately, findmerge does not use the type manager's compare function but a hard-coded comparison, and will think the files are different even though the only difference is in the keywords.

There are two possible workarounds:

- Delete or rename the development branches after delivery. This is a good thing to do in general, since it will keep the version tree looking clean, especially if the development branch name is to be re-used for the next change;

- Have the RCS keywords pre-checkin trigger look for a merge hyperlink, test if the merge was a copy merge and, if so, leave the keywords alone.

Most often, this is done in the context of upgrading third party software or doing some other form of mass checkin. Recently, Rational came up with clearfsimport, a tool that will apparently do the right thing. Stay tuned...
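For reference, the kind of sweep meant here is roughly the following sketch (untested; requires a ClearCase view, and the find/checkin pattern is an assumption based on common practice):

```
# Check out every file element, overlay the new third-party release
# on top of the view, then check everything back in.
cleartool find . -type f -exec 'cleartool co -nc "$CLEARCASE_PN"'
# ...copy the new source tree over the checked-out files...
cleartool find . -type f -exec 'cleartool ci -nc "$CLEARCASE_PN"'
```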

The commands above will fail if no changes have been made. You can either insert the -identical flag or re-run the above once more, this time cancelling the checkouts with unco -rm.


An evil twin is two links with the same name, each pointing to a different element, in two different versions of the same directory element.

The reason they are evil is that they create the appearance of a directory containing the same file on two different branches, when in fact the two directory versions contain two different files with the same name. If this situation isn't detected, one can very well end up with two change histories of something that everybody thought was the same file, only to get a rude surprise at merge time. There is essentially no good way to merge the two change histories.

Evil twins get created when two developers concurrently add the same file to source control without coordination, or if one developer copies files from another developer's view and puts those files under source control.

Copying files around should be discouraged, but the best way to avoid evil twins is to create a pre-mkelem trigger that will search through some or all versions of the directory containing the new element, verifying that no link with the same name exists.

The following perl code implements this trigger and prints out a warning together with the suggested link command that will link the existing element into the version of the directory where the newly created element would have ended up.

The trigger will also do a sanity check on the file name itself, catching most unintentional element creation attempts.
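The idea behind that sanity check can be sketched in shell (the real trigger is Perl, and the patterns below are illustrative assumptions, not the contributed trigger's actual list):

```shell
# Reject names that are almost certainly accidental mkelems, such as
# editor backups, swap files, object files or core dumps.
check_name() {
    case "$1" in
        *~|*.bak|*.o|*.swp|core) echo "suspicious: $1"; return 1 ;;
        *)                       echo "ok: $1";         return 0 ;;
    esac
}
check_name foo.c
check_name foo.c~ || echo "(a real trigger would abort the mkelem here)"
```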

The trigger relies on the excellent ClearCase::ClearPrompt and ClearCase::Argv modules, available at a CPAN site near you.

I consider it a nuisance. One common development strategy is for every developer to create their own development branch by using a view with a config spec similar to this one:

element * CHECKEDOUT
element * .../mybranch/LATEST
element * BASELINE -mkbranch mybranch
element * /main/0 -mkbranch mybranch

Note that whenever a developer starts working on some file he hasn't touched, a branch gets created at that point. As work progresses, more and more elements will have a "mybranch" branch.

Once in a while, the BASELINE label gets moved to the newest approved base line, and developers are encouraged to rebase via a merge. The merge will do something for every element that has a "mybranch" branch and for which the version that BASELINE selects changed since the branch was made.

If one leaves behind branches carrying only the zero version, the rebase will cause copy merges, which won't affect the logical consistency of your view of the source tree, but will affect performance. Besides, zero-version branches just look sloppy.

Most sites add a post-rmbranch and post-unco trigger to remove the dangling branch with only a zero version. The following code implements this trigger. Paul D. Smith wrote this trigger, and I added some code to deal with snapshot view weirdness:

#!/bin/perl
#--------------------------------------------------------------------
# Nothing special needed Perl-wise - allow Rational Perl to be used
#--------------------------------------------------------------------
require 5.001;

#--------------------------------------------------------------------
# Debugging aid. Overloading the semantics of the standard
# CLEARCASE_TRACE_TRIGGERS EV: if it's set to -2 we dump
# the runtime environment into a file in the current dir.
#--------------------------------------------------------------------
if (int($ENV{CLEARCASE_TRACE_TRIGGERS}) < 0 &&
    ($ENV{CLEARCASE_TRACE_TRIGGERS} & 0x2)) {
    open (EV, ">rm_empty_branch.txt");
    print EV "$x=$y\n" while (($x,$y) = each %ENV);
    close EV;
}

#--------------------------------------------------------------------
# See if the user wants to suppress this trigger's actions:
#--------------------------------------------------------------------
exit 0 if $ENV{CCASE_NO_RM_EMPTY_BRANCH};

#--------------------------------------------------------------------
# Use safest quoting possible - java creates files with $ in
# them, so I can't just universally use "
#--------------------------------------------------------------------
$q = ($ENV{OS} eq "Windows_NT" ? '"' : "'");

# Remove empty branches: if a branch has no elements (except 0 of
# course) after an uncheckout or rmver, or the parent of a
# just-rmbranched branch is now empty, remove it.
#
# If the branch in question is /main, don't do anything (another option
# would be to rmelem the entire element, but that seems like a very bad
# idea).

Consider simply cancelling the checkout, but be aware that if the checkout was the result of a merge, cancelling it will make the merge arrow disappear, forcing you to redo the merge next time. In that case, using the -identical flag of the checkin command may be preferable.

The reason why you're not seeing the file is that the version of the directory selected by your view doesn't have a link to that file. Therefore the solution is to figure out which version of that directory element has the link.

The easiest way to access such a file is to use a different view, namely one that selects the appropriate version of the directory. Even if you are set to some view that doesn't, you can use the so-called view extended path to refer to the other view. On UNIX, this is done by prepending /view/viewtag to the full pathname.

In some situations, it may not be convenient or safe to use view extended paths. One problem is that while you are using a view extended path, someone else may be changing the config spec of that view. For logging purposes in particular, you are better off storing the object id of the element and letting ClearCase tell you a pathname that is valid in your view. In other words, first use a view that selects your invisible file and run this:

% cleartool setview otherview
% cleartool describe -fmt '%On\n' path/to/invisible/file@@
03fcf938.39c011d5.b891.00:01:80:ab:ed:ac

Note that the final @@ at the end of the path tells ClearCase that you are interested in the element's object id. Omitting the final @@ would give you the version's object id, which may or may not be what you need. Now to retrieve a pathname valid in your view, do this:
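A hedged sketch of that second step (untested; the vob tag /vobs/myvob is a placeholder):

```
# Back in your own view, ask for a pathname of the element with that
# object id; the oid:...@vob-tag selector names the element directly.
cleartool describe -fmt '%n\n' \
    oid:03fcf938.39c011d5.b891.00:01:80:ab:ed:ac@/vobs/myvob
```

The output is typically a version-extended pathname rooted near the top of the vob.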

Note how we enter version extended name space (or history mode on NT) very early in the path. This is natural, since our premise is that the file isn't visible in our current view. These paths can become quite long, easily blowing past NT's stupid limit on path name length or command line length if you ever try to use that pathname for some command.

Note that this technique is very useful for tracking down relocations. If in the example above the file was relocated to some unknown but visible location, ClearCase would have shown that location instead. ClearCase is amazingly smart in figuring out the shortest possible pathname to a specific element.

In order to set up a merge, findmerge needs to find the common ancestor of two versions. In complicated version trees, there can be many common ancestors, and finding the best one requires a non-trivial graph traversal algorithm.

The main problem is that while findmerge is looking for a common ancestor, it is holding a lock on the vob DB, preventing others from writing to the DB. In order to avoid holding the lock for an inordinately long time, findmerge will interrupt its search, release and reacquire the lock, and restart the search at ever increasing intervals. There are two environment variables that can be used to tweak this behaviour, which may improve performance for specific elements:

CLEARCASE_FM_TRANS_THRESHOLD (default: 128)
This is the base value for how many versions are examined prior to a restart. This value is doubled, tripled etc. as needed after every restart. Increasing this value will avoid a restart in those cases where the search was "almost there", but may increase the risk that other developers will experience vob DB timeouts.

CLEARCASE_FM_MAX_LEVEL (default: 65536)
This sets the maximum number of calls that will be made in the quest for the best common ancestor. This value is set very high by default. Setting it to a low value will definitely accelerate findmerge, but may cause it to choose a very old common ancestor, forcing the merge to step through many old changes.
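For example, before launching a large findmerge you might set (the values below are purely illustrative; tune them for your vob):

```shell
# Raise the restart threshold and cap the ancestor search depth for
# the findmerge runs started from this shell.
export CLEARCASE_FM_TRANS_THRESHOLD=512
export CLEARCASE_FM_MAX_LEVEL=1024
```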

There are two good techniques to speed up findmerge, often by orders of magnitude:

- Use -fver instead of -ftag. Unless you have a very unusual situation, -fver should be good enough.

- Add -element queries to restrict the number of elements tested. If, for example, you use the common development branch/delivery branch technique, and want to merge the latest from the delivery branch into your development branch, you really only need to consider those elements that have a development branch. So, using the following command instead of a straight findmerge can result in huge performance improvements:

  % ct findmerge someplace \
        -element 'brtype(dev_branch)' \
        -fver .../del_branch/LATEST -merge

Using -avobs instead of recursive descent can also improve performance, but is a riskier technique if directory merges are involved. Since you don't control the order of merges when using -avobs, you could conceivably merge a directory element which isn't visible yet because the parent directory element hasn't been merged yet. Using -avobs -visible will avoid error messages, but may cause you to miss some merges entirely.

This is a good interview question... A typical development config spec looks like this:

element * CHECKEDOUT
element * .../mybranch/LATEST
element * BASELINE -mkbranch mybranch
element * /main/0 -mkbranch mybranch

or like this:

element * CHECKEDOUT
element * .../mybranch/LATEST
element * .../deliverybranch/LATEST -time sometime -mkbranch mybranch
element * /main/0 -mkbranch mybranch

They all contain the /main/0 rule at the end because otherwise you couldn't create new elements. When you create a new element, it will only have a /main/0 version. It wouldn't have a "mybranch" branch, nor a BASELINE label (in the first config spec), and wouldn't have a "deliverybranch" either. Therefore, right after the element was born, it would disappear from your view, since no valid version is selected and the -mkbranch operation couldn't take place. Adding the /main/0 rule ensures that you can always see a newly created element.

The typical follow-up interview question then is: "if everybody uses /main/0, how come I can't see somebody else's new element?". This is a consequence of directory versioning. In fact, two conditions must be satisfied if you are to see a specific element in your view:

- Your config spec must select a valid version;

- Your config spec must select a version of the containing directory that has a link to that element.

It is easy to see that the second condition is not satisfied for other users who use their own development branches.

Well, why not use /main/LATEST instead of /main/0?

Using /main/LATEST may hide labeling errors. Suppose, for example, that there are elements that are missing the BASELINE label for some reason, and you are using a config spec like the first one above. If you use /main/0, you will get empty files and empty directories for such elements, which have a good chance of at least causing some visible warning or error at build time. If you use /main/LATEST, you may end up using the wrong version and not notice it.

There is yet another case where /main/0 is required, and is only indirectly related to creating new elements: intentionally empty directories or files.

When one creates an empty directory or file and later merges it onto a new branch, the findmerge algorithm will notice that the base contributor (/main/0) and the target contributor (also /main/0) are identical, and since the source contributor is empty it will skip the copy merge, leaving you with nothing. Therefore, you need to select /main/0 even in a config spec where no new elements are created, such as:

element * .../deliverybranch/LATEST -nocheckout
element * /main/0 -nocheckout

Oh, and how come Rational's documentation uses /main/LATEST all over the place? My guess is that it's a holdover from old documents, where more complex config specs were derived from the default config spec. It is a non-obvious step to replace /main/LATEST with /main/0, and even to this day, many people think that there is some kind of obligation to merge back into main...

You can experiment with setting various access permissions on the view storage directory, but this isn't really recommended. Instead, you should figure out why developers are changing their config specs and why this would be a problem.

My preferred method for dealing with this is to encourage the use of view maintenance wrapper scripts that include config spec generators. These scripts should fit flawlessly into your process and simplify it. Your measure of success will be whether the developers prefer the scripts to hacking their own config specs.
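A hedged sketch of such a generator (branch and label names are placeholders; the real script would take them from your process):

```shell
# Generate a development config spec from two parameters, so the spec
# always matches the process instead of being hand-edited.
branch="mybranch"
baseline="BASELINE"
cat > cspec.txt <<EOF
element * CHECKEDOUT
element * .../$branch/LATEST
element * $baseline -mkbranch $branch
element * /main/0 -mkbranch $branch
EOF
cat cspec.txt
# A wrapper would then apply it with: cleartool setcs cspec.txt
```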

You can't and shouldn't, at least for those people who need access to the code. Obviously, you need to keep unauthorized people away, but this can be accomplished by properly setting access permissions to vobs. In other words, you can't prevent people from creating views but you can prevent them from seeing anything with them.

Views really are lightweight and ephemeral. Creating views shouldn't be a big deal. Your change process should encourage people to check in their work often and essentially treat views as temporary storage areas.

This can happen if you (a) checked out a version in the version tree, or (b) a mkbranch rule has been added to your config spec which creates a new branch that is not selected by your view.

Under Windows, open the version tree and look for the icon that resembles an "eye"; it indicates which version is currently selected by your view. Now look for the checked-out version (a circle with no version number in it); in most cases the branch is different! Examine your config spec to find out what happened.

On UNIX, enter cleartool ls {element name} to find out which rule ClearCase uses to select the version, and cleartool lsco {element name} to see which version has been checked out.

The "linked storage area" is the physical storage location of view private files. This storage location can be on external devices, for example filers, whereas the view database storage needs to be on a local disk.

Recent versions of ClearCase have relaxed this condition somewhat, and if you're using filer hardware recommended by Rational, you can put the whole view storage directory on the filer. In other words, linked storage areas are somewhat obsolescent, designed to work around a problem that is disappearing.

You must lock vobs to keep the database consistent with the storage containers where all the data (element versions) is stored. Write operations that occur during the backup break this consistency: the keys in the database will not be fully correct, and you may have trouble accessing certain elements. You can use cleartool checkvob -pool -source to check for errors.
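A hedged backup sketch (untested; the vob tag and storage paths are placeholders):

```
# Lock the vob so the database and the storage pools stay consistent,
# copy the storage directory, then unlock.
cleartool lock vob:/vobs/myvob
tar cf /backup/myvob.vbs.tar /vobstore/myvob.vbs
cleartool unlock vob:/vobs/myvob
# Afterwards, verify: cleartool checkvob -pool -source /vobstore/myvob.vbs
```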

Remember, inconsistencies in the database are hard to notice without the cleartool checkvob command. You might think that everything is okay but months later you will run into problems.

As you know, metadata types like branches, labels, attributes etc. are stored per vob. Before you can create instances of them (e.g. mklabel) you have to create the type object itself (e.g. mklbtype). If you have a project where different vobs belong together, you usually want to make the types available in all vobs, but of course you don't want to create them n times for n vobs. Admin vobs are made for this situation: you create the types once in your admin vob, and they are available in all other vobs which are linked to it.

In UCM, the project vob automatically becomes the admin vob and stores all necessary information.

ClearCase puts all elements which are no longer referenced into lost+found. Imagine you remove a directory with rmname or rmelem: ClearCase won't recursively remove the elements in that directory, but they cannot be accessed either. Instead, they will be moved to lost+found.
