Sunday, August 29, 2010

----------------------------------------------------------------------------------
Postings in the same series:
Part I – The Introduction
Part III – To Serve And To Protect
Part IV – Let's Create a Simple Task
----------------------------------------------------------------------------------

This posting explains how Tasks work.

As described before, there are two different kinds of Tasks: Console Tasks and Agent Tasks. The way they operate also differs, and that difference is important to know.

Console Tasks
These run locally on the system where the Console is run and use functionality and/or UIs which aren't typically SCOM based, like the SQL Management Studio UI for instance. Of course, in order for this to work the required applications/features need to be present on that system; otherwise these Tasks will not run. Also, the output created by these very same Tasks isn't piped back into SCOM.

So Console Tasks extend the SCOM interface in such a manner that the SCOM Console becomes a jumping board to other UI’s or functionality which aren’t typically SCOM based.

Another thing to reckon with is the way authorizations are handled. As stated before, the SCOM Console launches another UI and passes on the credentials which were used to start the SCOM Console. Depending on what UI is started, the authorizations set for the account used for launching the SCOM Console and the way security within the other application has been configured, an additional logon might be required.

Huh? What am I talking about? Let me show an example in order to clarify it. Let's say I started the SCOM Console with an account which has no permissions in the SQL environment (systemcenter\test). I am in the Database Engine View of the SQL MP in the Monitoring Pane of the SCOM R2 Console:

and select a server on which the SQL Engine has been detected by the SQL MP. In the Actions Pane, under the SQL DB Engine Tasks part, the Console Task SQL Management Studio is displayed:

When I click this link SQL Management Studio is started but this message is displayed:

So in order to have this UI connect to a certain SQL DB Engine, I need other authorizations since the test account will not do.

Agent Tasks
Whereas Console Tasks launch UIs or functionality which reside outside the SCOM Console, so that the output created afterwards isn't piped back into SCOM, Agent Tasks launch processes/scripts defined in SCOM (in the MPs, that is), whose output is piped back into SCOM. The strength here is that everything is kept within a single UI, the SCOM Console. In order for these Tasks to run, credentials are required. By default the credentials used by the SCOM Agent are passed on to the Task. However, one can run Agent Tasks with other credentials as well.
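Under the covers an Agent Task is just MP XML. The sketch below shows roughly what such a task definition could look like; all IDs, the probe module and the command line are illustrative examples made up for this posting, not taken from an actual Microsoft MP, and the exact module type and parameter names depend on the library versions in use:

```xml
<!-- Illustrative Agent Task definition (all IDs are made up).
     The Health Service spawns a MonitoringHost.exe process which runs
     this command on the targeted agent; its output is piped back into SCOM. -->
<Task ID="Demo.DisplayLocalUsers.AgentTask"
      Accessibility="Public"
      Enabled="true"
      Target="Windows!Microsoft.Windows.Computer"
      Timeout="300"
      Remotable="true">
  <Category>Maintenance</Category>
  <ProbeAction ID="Probe" TypeID="System!System.CommandExecuterProbe">
    <ApplicationName>%windir%\system32\net.exe</ApplicationName>
    <WorkingDirectory />
    <CommandLine>user</CommandLine>
    <TimeoutSeconds>60</TimeoutSeconds>
    <RequireOutput>true</RequireOutput>
  </ProbeAction>
</Task>
```

Because the probe runs on the agent, the credentials question discussed below applies to this process, not to the Console.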

But how does it work exactly? What kind of processes are spawned and where? Let’s take a deeper look into how it works.

For starters, the Health Service process plays a crucial role here (for more detailed information about that process, read this posting of mine). In order to illustrate it, let's run an Agent Task and go through the nuts and bolts as it happens. In this example I run an Agent Task against a test server of mine, SV02.

I am in the SCOM Console, the Monitoring Pane in the Windows Computer part:

I select the server (SV02) and check the Actions Pane. Under the header Windows Computer Tasks there are multiple Tasks available. Among them is the Agent Task Display Local Users.

When I click this link the Run Task screen is displayed:

I have highlighted the Task Credentials area since this part plays a very important role in Agent Tasks. The first option, 'Use the predefined Run As Account', is always selected by default. Even though it seems self-explanatory enough, some extra explanation is needed here because I have noticed that there is some confusion about it.

Why? Many times people tend to think that the Local System account is used here. But that isn't the case. Let's take a few steps back and look at how the SCOM Agent operates.

Normally the SCOM Agent runs under the Local System account. When I say SCOM Agent, I actually mean the related Health Service, whose process name is HealthService.exe. Taken from my earlier mentioned blog posting:

Typically – you will see a couple MonitoringHost processes executing under the Default Agent Action Account. In addition, the HealthService will launch MonitoringHost processes under any preconfigured Run-As accounts that are executing workflows on the agents, using those credentials. Thus ‘giving’ the HealthService the credential management capability to support the execution of modules running as different users.

So by default, the credentials defined in the Run As Profile ‘Default Action Account’ will be used to run the Agent Task when the default option ‘Use the predefined Run As Account’ is chosen and not the Local System account.

However, certain MPs require additional authorizations in order to function (also depending on how tight the security is set in your environment of course). For instance the SQL MP. When this MP is imported, three additional Run As Profiles are added to the list of available Run As Profiles: ‘SQL Server Default Action Account’, ‘SQL Server Discovery Account’ and ‘SQL Server Monitoring Account’.

In this case, when these Profiles do have Run As Accounts configured, an Agent Task based on the SQL MP will use the Run As Account defined in the first Run As Profile, ‘SQL Server Default Action Account’. When this Run As Profile doesn’t have a Run As Account configured, the account defined in the Run As Profile ‘Default Action Account’ will be used instead.

So depending on which MP the Agent Task comes from, the Default Action Account will be used or the Run As Account as defined in the related Run As Profile.

But as you know, you may choose another set of credentials as well. To do so, select the option Other in the Run Task screen, type in the required User name and Password and select the Domain where the account resides.

When you hit the Run button, a flow of processes starts. The SCOM Agent is notified to run a certain Task as defined within the related MP. In order to do this it will spawn an additional MonitoringHost.exe process, using the credentials selected in the Run Task screen. In this example I have entered the credentials for the Test account in order to make it more visible:

When I check the running MonitoringHost.exe processes on the targeted server BEFORE hitting the RUN button, this is what I see:

Now I hit the RUN button and check the running processes again. An additional MonitoringHost.exe process is spawned and, as you can see, it runs under the credentials of the test account:

This process runs for only a couple of seconds. When the Task is finished the process is automatically ended. The Task Output is collected and piped back to SCOM:

When an Agent Task is running, the Run Task screen can be closed at any time. However, this will not interrupt the running Task. Its results can be found in the Task Status part of the SCOM Console:

The Details Pane will display the details of the selected Task:

The next posting in this series will be about how to scope the Tasks to the correct group of SCOM Operators.

I got this from the blogs of Kevin Holman and Jimmy Harper: the Event log rules in the Microsoft.Windows.Server.AD.2008.Monitoring.mp don't work as designed. As a result these rules don't generate Alerts.

For now, an 'add-on' MP has been created by Jimmy Harper which disables these rules and replaces them with fixed versions. This MP needs to be imported into your SCOM environment, alongside the original MP.

Friday, August 27, 2010

I bumped into this issue at a customer's site. The Exchange Test CAS Connectivity user account got locked out all the time, which generated many Alerts in the SCOM R2 Console.

However, since the monitored Ex2010 environment was still under construction, we ignored it. But as soon as Exchange 2010 was about to go live, we had to take a deeper dive and solve it. Luckily the team responsible for the Ex2010 implementation found a KB article which described the issue we were experiencing.

As it turned out, ASP.NET impersonation on the RPC and RPCWithCert virtual directories had to be enabled. Want to know more? Read KB2022687.

Thursday, August 26, 2010

As we all know, scheduling Rules to run at specified times can be done, but it is not easy. In order to achieve it, some real XML editing has to be done. But what if you are a system engineer with many more tasks at hand besides maintaining SCOM? Life is complicated as it is, so many people stay away from it.

Cameron Fuller has posted an excellent article about how to achieve this without taking a deep dive into XML. Of course, one still has to adjust some XML code, but that is just as 'complex' as checking the engine oil of your car, whereas the original approach is as challenging as building your own car :).


Thanks Cameron for sharing! Much appreciated! Posting to be found here.

Yesterday evening I got a mail message from Silect. They have a free Whitepaper available for download. It is all about planning your SCOM deployment along with some best practices about MP deployment and life cycle.

Even though Silect offers tools for MP Life Cycle Management (among other things), it is still a valuable Whitepaper even when you don't use their tooling. Many companies forget about this, while MPs make or break any SCOM environment.

Tuesday, August 24, 2010

----------------------------------------------------------------------------------
Postings in the same series:
Part II – How It Works
Part III – To Serve And To Protect
Part IV – Let's Create a Simple Task
----------------------------------------------------------------------------------

Tasks are a feature of SCOM which is a bit underestimated. Many times organizations do not utilize it to the fullest extent. Sometimes they forget to look into the Actions Pane under the header “… Tasks”. Or they are a bit frightened because some Tasks are not to be taken lightly and can cause serious issues when run by persons who do not fully understand what they are doing.

This series of postings will be about Tasks: where they are to be found, why they are present in SCOM, where they come from, what differences there are between Tasks and how to use them. Some Tips and Tricks will be shared as well. Also an approach will be described where people only get to see the Tasks which are directly related to their field of work and responsibilities. And, on top of it all, a simple but handy Task will be authored which enables you to run any Alert shown in the Console against Google as a query. So let's start!

Q01: Where are the Tasks to be found?
Hmm, anywhere. Or almost anywhere. They are always to be found in the Actions Pane, which resides on the left side of the SCOM Console.

A nice feature of the Task View is that it adapts itself. That is why I wrote “… Tasks” in the introduction of this posting. These three dots are there for a purpose. Depending on where you are in the Monitoring Pane of SCOM, the Tasks header adjusts itself accordingly. So the Tasks are always relevant. No Task to stop a SQL Server service will be shown while you are viewing a server in a DNS folder of the Monitoring Pane. This enables one to scope the Tasks to the people who know what they are doing. (More about that in a later posting.) Some existing Task Views are: Windows Computer Tasks, Alert Tasks and SQL DB Engine Tasks.

As the header name suggests, all these Tasks are directly related to that topic:

Q02: Nice! But why are they present in SCOM?
Good question! Never take anything for granted. Always keep asking questions; this way you will learn something. SCOM is not just a product which tells you something is broken and ends there. It will also help you find out why it broke and refer you to KB articles which might be the answer to the issue(s) you are experiencing. And the help doesn't end there. No!

It also offers you some functionality right from the Console which helps you start troubleshooting, like pinging a server, starting an RDP session or opening SQL Management Studio, for instance. All these actions are Tasks. So Tasks are here to help you and to use the SCOM Console as a jumping board, enabling you to work faster and keep you focused as well.
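The ping action is a nice illustration of how small a Console Task really is under the covers. Sketched in MP XML it could look like the fragment below; the IDs and the property path are examples made up for this posting, not copied from the actual System Center library:

```xml
<!-- Illustrative Console Task definition (all IDs are made up).
     Runs ping.exe on the machine hosting the Console, under the
     credentials of the user running the Console. -->
<ConsoleTask ID="Demo.PingComputer.ConsoleTask"
             Accessibility="Public"
             Enabled="true"
             Target="Windows!Microsoft.Windows.Computer"
             RequireOutput="true">
  <Application>%windir%\system32\ping.exe</Application>
  <WorkingDirectory />
  <CommandLine>$Target/Property[Type="Windows!Microsoft.Windows.Computer"]/NetworkName$</CommandLine>
</ConsoleTask>
```

Note that nothing in this definition refers to the Health Service: the Task never reaches the agent, which is exactly why its output is not piped back into SCOM.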

Of course, I know that some Tasks require a bit of attention from Microsoft and that some Alerts do not display all the required information. But… Microsoft listens and takes feedback seriously. You only have to tell them. How? Go to Connect as described in another blog posting of mine and follow the instructions.

Q03: OK, I see. But where do these Tasks come from?
From the MPs you have imported into your SCOM environment. Many MPs contain Tasks which are directly related to the product/service/application the MP is targeted against. So the DNS MP contains DNS related Tasks whereas the SQL MP contains SQL related Tasks, and so on. The guide of the related MP will tell you what Tasks are to be found in that MP. So RTFM is the credo here :).

Q04: Are there differences between Tasks?
Yes, there are. The main difference is between Console Tasks and Agent Tasks. Console Tasks run locally on the computer where the Console runs AND (VERY IMPORTANT TO KNOW!!!) these Tasks run under the credentials which are used to run the Console…

Agent Tasks run remotely on the Agent or Management Server AND (VERY IMPORTANT TO KNOW!!!) these Tasks use the credentials which the SCOM Agent uses or one can enter other credentials in the Run Task screen:

Huh? How can you see whether a Task is Agent or Console based? The icon will tell you more about it:

This icon tells you it is an Agent Task: and this icon tells you it is a Console Task:

The next posting in this series will be about how Tasks work. So stay tuned!

Friday, August 20, 2010

For some time now a SCOM TechNet Wiki has been available on the internet. Even though it does not contain tons of material yet, much good information is to be found there. The strength here is that it is a single place where many resources, found all over the internet, are put together.

This is a nice one. A respected friend of mine from Australia mailed me a good question. It took me some time to get to the bottom of it and to figure out a good approach.

The Case – The Monitored Cluster
Suppose you run a file server based on a Failover Cluster configuration, consisting of Cluster Node A and Cluster Node B. Cluster Node B is idle and Cluster Node A is the owner of all resources, among them Disks P1 and P2.

This configuration is monitored by SCOM (R2 ideally). For this the Server OS MP and the Cluster MP (among others) have been imported and configured. The Proxy setting on the SCOM R2 Agents running on both Cluster Nodes has also been enabled. So far so good. The Cluster is monitored and performance collection runs as well.

The Case – Disaster Strikes
Cluster Node A runs well from Monday till Wednesday morning but dies on Wednesday afternoon. Cluster Node B kicks in and becomes the new owner of all the resources, among them Disks P1 and P2.

The Case – The Report and the Missing Data
After a week someone runs a Report in order to find out more about the percentage of disk space used on disks P1 and P2. The Report is targeted at server level. At first glance the Report seems to be just fine. But wait! From Monday till the beginning of Wednesday data is neatly shown, but after that the graph drops to zero! Huh?

The Question
What? Where has it gone? The disks are still in place and available. So why does the graph suddenly drop to zero, or better, nothingness? Has the Cluster MP turned sour?

The Explanation – Part I
First of all, the Cluster MP does not collect any performance metrics at all. This is done by the Server OS MP. The Cluster MP covers many health and configuration aspects of the Cluster itself and Alerts when something is not OK.

Time to move on.

The catch here is that Cluster Node B has become the new owner of the disks. So that server will run the collection rules from the moment (*) it became the owner. So when you run a new Report targeted against that server, the graph will start from Wednesday. (* There is a pitfall to reckon with!)

So you end up with two Graphs? One for Cluster Node A and another for Cluster Node B? Yes, you could…

The graph for Cluster Node A displays a normal graph from Monday till Wednesday and after that a flat line. The Report targeted against Cluster Node B shows the reverse: a flat line from Monday till Wednesday and a valid graph from early Thursday till Friday.

What about the pitfall?

Good question! As we all know, monitoring and/or performance collection can only start AFTER the discovery has run and ended successfully. The latter is no issue, but the first part is. Why? Well, the discovery of the Logical Disks runs only once every 24 hours:

So in a 'worst-case' scenario you miss out on monitoring and performance collection for up to 24 hours! Of course, an override could be used here, targeted against the Group 'Cluster Roles', in order to reduce that time. But use it wisely. Discoveries that run too often can cause other issues…
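For those curious what such an override looks like when saved to an unsealed MP: roughly something like the fragment below. The Context and Discovery IDs are placeholders; the real IDs depend on the versions of the Cluster and Server OS MPs in your environment.

```xml
<!-- Illustrative override (element IDs are placeholders): lowers the
     Logical Disk discovery interval from 86400 seconds (24 hours) to
     14400 seconds (4 hours) for the members of the Cluster Roles group. -->
<DiscoveryConfigurationOverride ID="Demo.Override.LogicalDiskDiscoveryInterval"
                                Context="Cluster!Demo.ClusterRolesGroup"
                                Discovery="ServerOS!Demo.LogicalDiskDiscovery"
                                Parameter="IntervalSeconds"
                                Enforced="false">
  <Value>14400</Value>
</DiscoveryConfigurationOverride>
```

The same override can of course be clicked together from the Authoring Pane of the Console; this is merely what ends up in the MP afterwards.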

The Explanation – Part II, the Smart Approach
When you are running two-Node Clusters, the above mentioned approach should do. But suppose you are running a Cluster with more than two Nodes? When a failover occurs, there are multiple possible new owners available. So when a Report is to be created, one must know exactly which Cluster Node was the owner of the Resource the Report is about. And not just that, but also when…

This is not viable at all. It would take way too much time. So another approach is required.

The idea here is that you do not target the Cluster Node owning the Resource, but the Resource itself instead. When you select the disk instead of the Cluster Node, you will find two or more paths related to this object, which is logical when a failover has occurred. Referring to the above mentioned example, you could see something like this in the Add Group screen for the Performance Report when adding a new Series to a Chart:

Name | Class                            | Path
-----|----------------------------------|-----------------------
P1   | Windows Server 200x Logical Disk | FQDN of Cluster Node A
P1   | Windows Server 200x Logical Disk | FQDN of Cluster Node B
P2   | Windows Server 200x Logical Disk | FQDN of Cluster Node A
P2   | Windows Server 200x Logical Disk | FQDN of Cluster Node B

Add one Series per path to the same Graph. This way you will get a graph which shows all the collected performance data across the different Nodes, without the need to take a deep dive into which Cluster Node owned which Resource and when…

What is it? A Super Flow is a collection of help files, pictures, diagrams, videos and online resources, all combined in one file/application. This particular Super Flow introduces SCOM to people who are new to the product and have to work with it in an Operator User Role.

Much of the information in this Super Flow is based upon the SCOM documentation. The strength of it is that everything is to be found in one place without taking a real deep dive. When people still have questions, they can go to the Resources tab which shows them where to get additional information.

Wednesday, August 18, 2010

As posted before, there is a great Report available which shows the percentage of free space on all monitored Windows Servers.

Even though it is a great report (the Free Space Report), it can time out a lot when targeted against reasonably sized environments. And when it does not, it may run for quite some time (up to an hour or more). Don't get me wrong here, I am not downplaying the hard work of some much respected SCOM addicts, but just sharing some experiences.

But lucky me! Since a few weeks I have a new colleague who is really into SCOM. He has done many projects as well, and one of his customers used the same Report. And there they ran into the same issues as I did.

However, this customer has some SQL gurus who looked at the query and did some magic with it. The results? The Reports are rendered much faster. For instance, the Free Space Report targeted against All Windows Computers now runs in a matter of two minutes! No more time outs…

Monday, August 16, 2010

As stated before, the new SQL MP (version 6.1.314.36) does not cover SQL 2000 instances. But how to go about it when you have SQL 2000 instances in place which require monitoring?

The good news is that Microsoft will soon release an MP for just that. This MP will be based on the last SQL MP (version 6.0.6648.0) which covered SQL 2000 instances, and will depend on the Libraries in the latest version of the SQL MP. This separate SQL 2000 MP won't be developed any further though.

At the moment I have this setup in place at a customer of mine and I must say, it works great. The SQL 2000 MPs are imported on top of the latest SQL MP which does not cover SQL 2000 anymore. As you can see, the SQL 2000 MP components have been imported alongside the latest SQL Server MP…

Suppose one does not have the previous version of this MP, and does not have the luxury to wait until the 'new' MP covering SQL 2000 comes out. As a service I have put these two SQL 2000 MP components (Microsoft.SQLServer.2000.Discovery.mp and Microsoft.SQLServer.2000.Monitoring.mp) on my SkyDrive, to be found here.

Normally I would not do this since Microsoft is the one and only company responsible for offering their MPs. But the SQL 2000 MPs won't be developed any further AND I get a lot of questions from the Community about where to find the SQL 2000 MP related components.

Also, the above information about the SQL 2000 MP is based on this thread, to be found on the OpsMgr TechNet Forums:

A few days ago the updated Core MP for SCOM SP1 (version 6.0.6709.0) was released by Microsoft. Actually it is more of a re-release of the Core MP in which the ODR MP has been updated to version 6.1.7676.0.

A few days ago the updated Core MP for SCOM R2 (version 6.1.7672.0) was released by Microsoft. Actually it is more of a re-release of the Core MP in which the ODR MP has been updated to version 6.1.7676.0.

In every SCOM environment I have worked with, this question always pops up: WHAT servers run an Agent, WHEN was that Agent installed, by WHOM, and what is its configuration?

There are multiple viable approaches here, like a View or a Report. The one which I found to be most popular, however, is the Report. This Report is created once, published or saved to an MP, and ready to rock and roll any time it is needed. One can even schedule the Report, select an Excel file as output, and have it sent out by mail or put on a file share. Since it is an Excel file, one can apply many filters to it in order to drill down into the information.

By default such a Report is not available in SCOM. But with a few mouse clicks – all done from the SCOM Console itself, so no rocket science is required – such a Report is quickly created. This posting will describe how to go about it.

One thing I need to mention: this posting is based on SCOM R2. It should work in SCOM SP1 CU#1 as well though.

First we need to create a Group. This Group is dynamically populated and also has some excluded members: all the SCOM R2 Management Servers. Not the Gateway Servers though, since these are nothing more than Super SCOM Agents. This Group will contain the Class Health Service.

When this Group is created we check its members and wait for about ten minutes (max). This way the newly created Group has a chance to 'get into the system', so the Report we are about to create can use it (the Group must be enumerated). Otherwise we end up with an empty Report when we go too fast. So a bit of patience is needed here.

Let's start.

Procedure 01: Creating the Group

Open the SCOM Console with Admin Permissions;

Go to Authoring > Groups > right click, select Create a New Group;

Give it a logical Name with a solid description and save it to a NEW MP;

Click Next > click on the Dynamic Members link > click Create/Edit Rules;

In the drop down menu select Health Service > click the Add button > nothing else needs to be done > now it looks like this:

Click OK > now it looks like this: As you can see, the Query Formula field is almost empty. This speeds up the Group Calculation process and lessens the chance that wrong queries are built, since these can really have a negative impact on the RMS;

Go to the Excluded Members section and add the Health Service Class of all the Management Servers;

Click Create. The Group is now created;

Check its members. It should be equal to all SCOM Agents WITHOUT the SCOM Management Servers.
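By the way, the Group you just clicked together ends up in the new MP as a GroupPopulator discovery. A simplified sketch of what the Console generates is shown below; all IDs are placeholders, and the ExcludeList holds the object IDs of the Management Servers' Health Services:

```xml
<!-- Illustrative group population rule (all IDs are placeholders):
     all Health Service objects, minus the explicitly excluded
     Management Servers. -->
<Discovery ID="Demo.AllAgentsGroup.DiscoveryRule"
           Enabled="true"
           Target="Demo.AllAgentsGroup"
           ConfirmDelivery="true"
           Remotable="true"
           Priority="Normal">
  <Category>Discovery</Category>
  <DiscoveryTypes />
  <DataSource ID="GroupPopulationDataSource"
              TypeID="SC!Microsoft.SystemCenter.GroupPopulator">
    <RuleId>$MPElement$</RuleId>
    <GroupInstanceId>$MPElement[Name="Demo.AllAgentsGroup"]$</GroupInstanceId>
    <MembershipRules>
      <MembershipRule>
        <MonitoringClass>$MPElement[Name="SC!Microsoft.SystemCenter.HealthService"]$</MonitoringClass>
        <RelationshipClass>$MPElement[Name="Demo.AllAgentsGroupContainsHealthService"]$</RelationshipClass>
        <ExcludeList>
          <MonitoringObjectId>{GUID-of-a-Management-Server-Health-Service}</MonitoringObjectId>
        </ExcludeList>
      </MembershipRule>
    </MembershipRules>
  </DataSource>
</Discovery>
```

This also shows why the waiting time mentioned above matters: the populator runs server side, and the Report can only use the Group once this relationship discovery has been processed.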

Now go outside and relax for a while. Come back after ten minutes please :).

OK, you’re back? Ten minutes have passed? Time to move on to the next stage.

Friday, August 13, 2010

As mentioned before, I would write a posting about how to scope the SCOM R2 Report for certain Report Operators User Roles. This posting will cover that topic.

There are a few things to reckon with:

For starters, it is better to leave the User Role Operations Manager Report Operators, which is available by default when SCOM R2 Reporting is installed, as it is. Do not use that User Role. Just create the Report Operator Roles as required and use those custom created User Roles as a basis for scoping the Report Operators role. This way there is always a way back when things behave differently than expected.

Secondly, a valid backup mechanism for the related SSRS database (ReportServer is its default name) needs to be in place and functional, since the scoping of the Views of the available SCOM R2 Reports is set within that DB. When that DB goes down and there is NO valid backup of that DB at hand, all the scoping of Views will be gone as well.

Thirdly, use the AGDLP approach here. Sometimes I see environments where they simply put a user into a User Role, but that is NOT the way to go, unless you want an environment which runs out of control. So AGDLP it is.

Let's start. In this example I will create a SCOM R2 User Role 'Report Operators' whose members are only allowed to view the Server OS related Reports. All the other Reports are not to be used by these people. For this, three procedures are required:

Active Directory (AD) – Group creation and population

SCOM R2 – User Role ‘Report Operator’ Creation

SSRS – Security Configuration

Procedure 01: AD – Group creation and population

First create a Domain Local Group, using the naming convention of your company;

Create a Global Group, also using the naming convention of your company;

Put the Global Group created in Step 2 into the Domain Local Group created in Step 1;

Put the users who need to have special Report Operator access into the Global Group created in Step 2.

OK, now we have covered the AD side of it all. Time to move on to the SCOM R2 Console.

Give it a proper name and a description. Add the Domain Local Group here since AGDLP is the way to go;

Next > Create. When the new User Role is created, open it again, go to the tab Identity and hit the button Copy. Now the ID is copied to the clipboard.

Now we have covered the SCOM R2 side of it all.

Lets see how far we are. We have created the Global Group ‘GG_SCOM_Report_Operators_ServerOS_Only’. This Global Group has been added to the Domain Local Group ‘DLG_SCOM_Report_Operators_ServerOS_Only’. In the SCOM R2 Console we have added this same group to the User Role ‘Report Operators – Server OS Reports Only’. And the user Test is a member of the earlier mentioned Global Group. And we have copied the ID related to this User Role.

Time to move on to the last and most important procedure. Without it, all previous actions are pointless.

Go to the tab Properties and hit the button New Role Assignment > paste the copied ID into the field Group or user name > select the Roles Browser and My Reports > OK; and

When we now use the Test account for opening the SCOM R2 Console, you will see that ALL the Reports are shown: Close the Console;

Go back to IE with the SSRS instance. Remember, we need to limit access to only the Server OS Reports. In order to achieve that we need the ID (Step 4, Procedure 02). So what we basically need to do is change the security on the folders which DO NOT contain the Server OS Reports. It is better NOT to alter the security on the folders 'My Reports' and 'Users Folders' since these are directly related to the SSRS configuration and not to SCOM R2 Reporting.

Example: Click on the first folder which needs to be altered. In this case, I click on the folder Microsoft.SQLServer.2008.Monitoring > click on Properties > Security > Edit Item Security. This dialog box is shown now: This basically tells you that you are about to change the security settings which are inherited from the parent. Click OK;

Select the Group/User with the ID (Step 4, Procedure 02) by placing a checkmark in the box. Click Delete: and Click OK. Please note this button: so there is always a way back :) ;

Repeat Steps 5 and 6 for every Report folder this User Role is not allowed to access. Now open the SCOM R2 Console with the test user account and check out the Reporting Tree: Wow! That looks a lot better compared to the View in Step 3! Run a Report in order to test it completely. Close the Console and you're done.

Recap:

In this posting I showed how to scope the available Reports to certain User Roles. It can be done but it is labor intensive. Actions are needed in AD, SCOM R2 and SSRS.

Also know that by accessing the Reporting Server directly (http://servername/myreports), the security which has been set in SCOM and SSRS will be circumvented. So people can still run those 'forbidden' Reports.

However, the interface which SCOM R2 offers is not present there, so it can be a challenge for those people to get those Reports running. For instance, compare the SCOM R2 Reporting parameter area for the SQL Report 'Top 5 Deadlocked Databases':


Why this blog?

On an almost daily basis I work with Azure, OMS & System Center related technologies. At the moment my main focus areas are Azure, OMS, SCOM & SCCM.

Because I bump into many challenges I decided to start this blog, which has two main purposes: to help YOU with mastering these products by covering the undocumented features and, last but not least, to serve as my personal - but open to anyone - knowledge base.

From January 2010 on I have been rewarded with the MVP award, and until now this status has been prolonged every year.


Disclaimer

The information in this blog is provided 'AS IS' with no warranties and confers no rights. This blog does not represent the thoughts, intentions, plans or strategies of my employer. It is solely my own personal opinion. All code samples are provided 'AS IS' without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and/or fitness for a particular purpose.