We are currently getting pushback on how we track changes through our new tool and the CMDB.

The problem is that people want to know exactly what has changed at the code level.

For example, say we have a Software CI defined as XYZ version 1.0 and someone needs to make a change to it. We would link XYZ v1.0 to the RFC as the CI being changed. But they also want to know the actual code modules the developers are updating, so they know exactly where the change was made.

At this point we don't go to the actual code level, but I can see why they would want this. They want to go into the RFC and be able to easily report and see which files were updated.

Does it make sense to put these into the CMDB as, say, sub-components of the Software CI? This could become huge, though, as hundreds if not thousands of code files could make up an application.

This is the crux of many CMDB implementations. Many folks in an organization will want as much detail as possible simply because they want the information, but diving too deep too soon, before you are there from a process and organizational maturity standpoint, is essentially a setup for failure.

For your situation, the short answer is yes, eventually you could get to that level of detail in your CMDB, but the question to ask is: do you need to? As with any ITIL implementation, you need to ask yourself, "What is the problem I am trying to solve?" Is there a problem within your organization that requires you to track down to the code level? If the answer is no, then at this point I'd highly recommend not adding additional levels until you are ready organizationally. An interim solution could be adding that detail to your RFCs when a request is filed. You can still track against that particular software; you will just need to look into the details of the request to get down to the code level.

Let me give a prior-life example: I handled change requests for our HR system, which serves roughly 125k employees (so not a small one). At the initial onset of our CMDB and change implementation, we had a CI for the highest level of this application, with the intent that eventually we would go down into more detail once we knew what we wanted to track against. In the interim, we utilized open text fields with standard entries to specify the area of the application a change was going to. Six months or so after implementation, we added 7 additional CIs representing the modules of this application, which rolled up into the higher-level CI of the application itself. This was only possible once we truly understood what and how we wanted to track changes, as we gained more experience.

_________________
Adam
Practitioner - Release and Control
Blue Badge

"Not every change is an improvement, but every improvement requires a change"

We've actually built this feature in for other firms. It's expensive and complicated to do. You will require:

- An enterprise DSL
- A standard and distributed LSL design (something ITIL neglects)
- A solid Source Code Control System with an appropriate API
- A CMDB with an appropriate API
- An automation framework, such as Lucent's Nmake
- A Software Build Framework, that has an appropriate API
- A Software Deployment Framework that has an appropriate API
- An RFC Promotion/Rejection Framework that has an appropriate API (it should also track dependencies between RFCs)

All of the above will need to be neatly integrated with each other, such that any action in any part of the system appropriately and transactionally populates all the others.

When your Change Management team asks for details of code modules, you will have to get them to clearly explain why they need such a level of control. Getting it is very expensive and having it has limited use for people that don't understand code. I can understand why "developers" would want such features but I can't for the life of me understand what value the Change Management team would add by having such information available to them.

Also, based on your description of how you link things together, you must not have a big enterprise, because if you did, manually linking everything together wouldn't be an option. It just won't scale. All linking should be done through standardized automation frameworks and templates.

BTW, the linkage is such that:

- A Product has one or more Releases
- A Release has one or more Changes
- A Change is linked to one or more specific versions of one or more Source Files

You will need to address things like Dependencies Between Changes, Dependencies between Source File Versions, and Circular Dependencies in both cases, to get your solution to work appropriately. This is not easy, unless you've done it a few times.
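The linkage and dependency rules above can be sketched as a small data model. This is only an illustrative sketch: the class names, fields, and the depth-first cycle check are my assumptions about one way to represent it, not any particular CMDB product's schema.

```python
from dataclasses import dataclass, field

# Hypothetical model of the linkage: Product -> Releases -> Changes ->
# specific Source File versions. Names are illustrative assumptions.

@dataclass
class SourceFileVersion:
    path: str
    version: str

@dataclass
class Change:
    rfc_id: str
    files: list                                      # SourceFileVersion objects touched
    depends_on: list = field(default_factory=list)   # other Change objects

@dataclass
class Release:
    name: str
    changes: list = field(default_factory=list)

@dataclass
class Product:
    name: str
    releases: list = field(default_factory=list)

def find_cycle(changes):
    """Detect circular dependencies between Changes via depth-first search."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {c.rfc_id: WHITE for c in changes}

    def visit(c):
        colour[c.rfc_id] = GREY
        for dep in c.depends_on:
            if colour[dep.rfc_id] == GREY:   # back edge: a circular dependency
                return True
            if colour[dep.rfc_id] == WHITE and visit(dep):
                return True
        colour[c.rfc_id] = BLACK
        return False

    return any(visit(c) for c in changes if colour[c.rfc_id] == WHITE)
```

For example, if RFC-1 depends on RFC-2 and RFC-2 also depends on RFC-1, `find_cycle` flags it, which is exactly the class of problem your promotion/rejection framework would have to refuse or untangle.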

We'd like the autodiscovery tool to:

- Help update our CMDB and keep it as current as possible
- Help verify authorized changes were completed
- Help detect unauthorized changes

So, when I have a custom application called XYZ that we develop in-house, consisting of many code files that get compiled into whatever binary form, I would like the autodiscovery tool to be smart enough to monitor the binaries and executables that make up application XYZ and notify us if any of those files change (get updated). That could then be mapped back to the RFC to verify that these files were expected to change, not just the higher-level CI.

See what I mean or am I off my rocker?
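For what it's worth, the core of what you describe is not exotic: snapshot the digests of the application's binaries, then compare a later snapshot against the files an RFC said would change. A minimal sketch, with the RFC's approved file list passed in as a plain list (a simplifying assumption, not a real tool's API):

```python
import hashlib
from pathlib import Path

# Illustrative sketch: flag binaries of an application that changed
# outside any approved RFC. Paths and RFC structure are assumptions.

def snapshot(files):
    """Map each file path to a SHA-256 digest of its contents."""
    return {f: hashlib.sha256(Path(f).read_bytes()).hexdigest() for f in files}

def unexpected_changes(before, after, approved_by_rfc):
    """Return files whose digest changed but were not listed on any RFC."""
    changed = {f for f in before if after.get(f) != before[f]}
    return sorted(changed - set(approved_by_rfc))
```

A file that changed and appears on an RFC's list is an expected change; one that changed without appearing on any RFC is exactly the unauthorized-change signal you want surfaced.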

I can see us tying into the DSL through an API from our Change and Config tool, mapping the components to the CIs people select: when they link a Software CI to the RFC they create, it would bring up a list of the lower-level components based on that CI. But I like to dream.

I'm just wondering, then, how you can make the development team happy: let them track their work and know which files they are checking in and out of their code repository and promoting up for builds and deployments.

An Autodiscovery Tool (ADT) is great for finding out what you don't know about your "physical technical" assets. However, there are some things you should keep in mind.

1. An ADT should not be how you keep your CMDB up to date, for many reasons. One is that it cannot detect most of the things you will need to track and manage within the CMDB. For your CMDB to be truly useful, you will need to track and manage many different things that an ADT can't help you with (Organizations, People, Products, Releases, Changes, Incidents, Problems, Documents, Services, etc.), not to mention the many permutations of Assets that can't be detected by an ADT, such as Vehicles, Furniture, and Software Source Code.

2. I recommend that you register new data in your CMDB, long before it gets deployed to its target environment. For example, you should be registering a Server in the CMDB, at inception, which is before it is built and detectable by an ADT. If you are not registering your CIs, at "inception" (or at least at "creation"), you should re-evaluate your processes, as they sound flawed.

3. I also recommend that you track the modification of data, at the time of modification, right in the CMDB. You should not be trying to catch changes "after" they get deployed. If you do this, there is a flaw in your process. However, using an ADT to help you detect unsanctioned changes is a fair purpose for its use.

4. Once you get your data into your CMDB, I recommend you use it as the "reference" for all things that use a "list" or inventory to seed their activity, such as your infrastructure monitoring solutions or your automated software build frameworks, etc.
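Point 4 amounts to treating the CMDB as the single inventory that downstream tooling reads from, rather than letting each tool keep its own list. A minimal sketch, where the CMDB query result and the monitoring registration call are both placeholders standing in for whatever APIs your actual products expose:

```python
# Hypothetical sketch: seed a monitoring tool from the CMDB rather than
# maintaining a second, competing inventory. The CI record shape and the
# register callback are assumptions, not a real product's API.

def seed_monitoring(cmdb_servers, register):
    """Register every operational server CI from the CMDB with monitoring.

    cmdb_servers: list of CI records (dicts) returned by a CMDB query.
    register: callable that enrolls one hostname in the monitoring tool.
    """
    seeded = []
    for ci in cmdb_servers:
        if ci.get("status") == "operational":   # skip retired/planned CIs
            register(ci["hostname"])
            seeded.append(ci["hostname"])
    return seeded
```

The design point is that the filter ("operational only") lives in one place, driven by CMDB status, so monitoring, build, and deployment frameworks all agree on what exists.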

An ADT tool is a great thing to have but it only focuses on a very small piece of the bigger puzzle. If you want your CMDB to be useful, you will need "business data and networks" stored within it, not "infrastructure data and networks". And, by "networks", I mean "connectivity", in the sense of Social Networks and Corporate Networks. If you focus on nothing but infrastructure, you will find that you will spend a fortune in time, money, and energy to maintain your CMDB and you will get very little out of it.

The main purpose of integrating the ADT with the CMDB is to ensure we have all assets discovered properly through initial discovery, as this task is too big to do manually and would definitely be prone to human error.

Once the initial population is established, the ADT can keep the assets up to date as best it can: simpler things like hardware specs (CPU, RAM, etc.), as well as the software installed on these servers (more COTS than anything).

Yes, I agree it can't help with everything, but it is definitely a big help and will provide us with more confidence that our CMDB is a trusted source of information.

It will also help us validate our process, to ensure that changes are going through change management, and allow us to make improvements or communicate better about where our process is failing or why people bypassed it.

I agree with registering new CIs at inception.

Not sure on your comment on tracking the modification of data at the time of modification, how do you do this or what do you mean?


You do this by planning for the modification and registering the modification "before" you execute it.

When it comes to virtual modifications, such as logging into a system to change software settings, scripts, etc., you should "never" modify a single thing in any environment without the details of the modification being fully scripted and tested in previous environments. All scripts should "phone home" to tell you what they're doing or have done, where they're doing it or have done it, etc.
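The "phone home" discipline can be as simple as a wrapper that records the planned modification before executing it and confirms it afterwards. A sketch under stated assumptions: the audit record fields and the in-memory log are placeholders for whatever your CMDB or change system's API actually accepts.

```python
import getpass
import json
import socket
from datetime import datetime, timezone

# Hypothetical phone-home wrapper: every scripted modification records
# what it is about to do *before* touching the environment, then records
# that it did it. "log" stands in for your CMDB/change system endpoint.

def record_event(log, rfc_id, action, status):
    """Append a structured audit record for the given RFC and action."""
    log.append(json.dumps({
        "rfc": rfc_id,
        "action": action,
        "status": status,                     # "planned" or "done"
        "host": socket.gethostname(),
        "user": getpass.getuser(),
        "time": datetime.now(timezone.utc).isoformat(),
    }))

def run_change(log, rfc_id, action, execute):
    """Register the change, execute it, then confirm it."""
    record_event(log, rfc_id, action, "planned")   # register before executing
    execute()
    record_event(log, rfc_id, action, "done")      # confirm afterwards
```

With this shape, the ADT's job reduces to auditing: anything it finds that has no matching "planned"/"done" pair is an unsanctioned change.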

When it comes to "physical" modifications, where you manually perform things like taking out circuit boards and replacing them, such changes should be registered in the CMDB before they're executed or, at very least, at the time that they're executed. And, the work should not be signed off until the CMDB reflects that the change was applied appropriately.

In either case, the ADT tool should simply be used as an audit solution to ensure that no one has modified anything "since the last known modification".

If you're doing your work correctly, an ADT tool should only tell you what you already know and pick up the "rare" issue, which means it acts as an expensive insurance policy. If you're not doing your work correctly and your ADT tool is regularly picking up changes and telling you things that you don't already know, you have much bigger issues than the deployment of an ADT tool. You have issues with process and execution of work.