We are looking at implementing a CMDB and a discovery system that will automatically define the relationships between CIs once the model that has to be represented has been designed. We are looking at both agent-based and agentless solutions, but I still cannot figure out which one would better fit our environment. We have a few thousand different systems (Unix, Windows, mainframe, virtual systems) running a few hundred different applications throughout the UK, so there is a level of complexity to manage. Our monitoring team is highly skilled in supporting agent-based systems, but I am not sure an agent-based system is the best for configuration management. What are your suggestions?

First things first... it depends... that was the standard answer.
...sigh...

Implementing the Configuration Management process, creating and maintaining the CMDB, and installing / configuring / maintaining the discovery tooling are three separate but related processes / projects / pieces of work.

First, create a Config Mgmt process to deal with the CIs - how they are defined, etc.

Second, draft what info you want in the CMDB. Start small, then grow.

As for the installation of the auto-discovery tool:

IMNSHO, I find that auto-discovery tools, or automated discovery agents used to control things, usually do more harm than good.

Any SNMP-based monitoring tool that you are currently using to monitor your environment should be able to give you the basic information - IP addresses, host names.
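
As a rough illustration of how little is needed to pull those basics, here is a minimal Python sketch - assuming the third-party pysnmp library (4.x hlapi) and SNMPv2c read access; the target hosts and community string are invented for illustration, not part of any specific tool:

    from pysnmp.hlapi import (
        getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
        ContextData, ObjectType, ObjectIdentity,
    )

    def get_sysname(host, community="public"):
        """Fetch SNMPv2-MIB::sysName.0 - the device's configured host name."""
        error_indication, error_status, _, var_binds = next(getCmd(
            SnmpEngine(),
            CommunityData(community),                    # SNMPv2c read community
            UdpTransportTarget((host, 161), timeout=2, retries=1),
            ContextData(),
            ObjectType(ObjectIdentity("SNMPv2-MIB", "sysName", 0)),
        ))
        if error_indication or error_status:
            return "<no answer: %s>" % (error_indication or error_status)
        return str(var_binds[0][1])

    for host in ("10.1.2.3", "10.1.2.4"):                # invented targets
        print(host, "->", get_sysname(host))

If your monitoring tool already polls these devices, it is collecting exactly this kind of data and can usually export it.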

The support teams should give you the rest - network, system, application - once you decide you want more info.

The issue I have with A.D. tools is that they are purchased to replace the onerous task of checking systems manually ('supposedly saving ££$'), but more cost is incurred when the configuration of the A.D. tool is not the best.

I have witnessed an A.D. tool that was installed and misconfigured. When it came time to autodiscover the network, it did. The entire network. Any device that was on the network. Oh, and the network was the entire Internet.
It took 40+ hours to start, then kept refreshing and refreshing.

There were no delimiters set to restrict discovery to specific IP ranges, etc.
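
The guard that was missing is simple to express. Here is a sketch in Python using only the standard-library ipaddress module - the CIDR ranges are made-up examples of an approved scope:

    import ipaddress

    # Made-up example scope: the only ranges this discovery run may touch.
    ALLOWED_SCOPES = [
        ipaddress.ip_network("10.0.0.0/8"),
        ipaddress.ip_network("192.168.0.0/16"),
    ]

    def in_scope(address):
        """True only if the address falls inside an approved range."""
        ip = ipaddress.ip_address(address)
        return any(ip in scope for scope in ALLOWED_SCOPES)

    # Refuse to probe anything out of scope, rather than walking the Internet.
    for target in ("10.1.2.3", "8.8.8.8"):
        print(target, "->", "probe" if in_scope(target) else "skip: out of scope")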

It took days / weeks to troubleshoot and re-configure the A.D. tool - time and money that could have been spent on other things.

An agent installed on your environment's machines is another story. What is the purpose / impact to the systems?
_________________
John Hardesty
ITSM Manager's Certificate (Red Badge)

Thinking that AD tools will give you config mgmt across multiple platforms without duplication and other errors is not realistic. They may refresh a list of technical hardware and software regularly, but you'll never know where anything is, what it does, who owns it, what is planned for it, or how it can be recovered. If you want a real problem: how will AD work with virtualised systems? Across firewalls?

And of course, how do you validate that the AD systems are correct? One error and they can't be trusted as a definitive source.
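
One way to keep discovery honest is to reconcile its output against the controlled record rather than let it overwrite anything. A hypothetical Python sketch - all host names, attributes, and values are invented:

    # Reconcile a discovery feed against the controlled CMDB; raise
    # differences as exceptions for a person to resolve.
    discovered = {"ukweb01": {"ip": "10.1.2.3", "os": "Windows Server"}}
    cmdb = {"ukweb01": {"ip": "10.1.2.9", "os": "Windows Server",
                        "owner": "Web Team", "recovery": "DR plan 17"}}

    def reconcile(discovered, cmdb):
        """Report differences for human review; never silently 'fix' the CMDB."""
        exceptions = []
        for name, found in discovered.items():
            known = cmdb.get(name)
            if known is None:
                exceptions.append("%s: discovered but not in CMDB" % name)
                continue
            for attr, value in found.items():
                if known.get(attr) != value:
                    exceptions.append("%s.%s: discovered %r, CMDB has %r"
                                      % (name, attr, value, known.get(attr)))
        return exceptions

    for line in reconcile(discovered, cmdb):
        print(line)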

Scoping to underpin improved working practices is the path to success, not gathering more data. If AD is contributing to a controlled source then it's good; if it is the source then it's not so good.

Agented will return more information
Agented means having an application sitting on each device
Agented may require admin passwords (= security/stability issues)
Agented may have a heavier network footprint - it's usually 'push' technology, which means the agent will wait until it's connected to the LAN before trying to send info.
Agents may not be compatible with all O/Ss, systems, security setups, etc.
Agentless is usually 'pull' technology, so it may just have to probe; if laptops aren't there at scan time, it may return a false zero (see the sketch after this list).
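
To show why that false zero happens, here is a Python sketch of a plain TCP reachability probe of the kind an agentless scan might rely on - hosts and port are invented for illustration:

    import socket

    def probe(host, port=22, timeout=2.0):
        """Classify a host as reachable or not - NOT as existing or not."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return "reachable"
        except OSError:
            # A laptop that is simply off the LAN looks identical to a
            # retired server here - record 'unreachable', never delete the CI.
            return "unreachable (may still exist)"

    for host in ("10.1.2.3", "10.1.2.4"):    # invented targets
        print(host, "->", probe(host))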

Either way, do not underestimate the work involved in defining your devices, their config attributes, and your data validation.
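
To make that concrete, a minimal sketch of what a CI record definition might look like - the attribute names are illustrative, not any standard schema:

    from dataclasses import dataclass

    @dataclass
    class ConfigurationItem:
        # Attributes a discovery tool can usually populate:
        name: str
        ip_address: str
        operating_system: str
        # Attributes only people and process can supply - ownership,
        # location, recovery - the ones that make the CMDB useful:
        owner: str
        location: str
        recovery_plan: str

    ci = ConfigurationItem(
        name="ukweb01",                    # invented example host
        ip_address="10.1.2.3",
        operating_system="Windows Server",
        owner="Web Team",
        location="London DC1",
        recovery_plan="DR plan 17",
    )
    print(ci)

Deciding which of those fields you actually need, and who is accountable for keeping each one correct, is most of the work.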