SCOM 2007 R2

SNMP port claimed, but not by the SNMP services.

Posted by rob1974 on October 24, 2013


I recently came across the following issue: SCOM was throwing alerts every minute, and auto-closing them just as fast, from the monitor "Network Monitoring SNMP Trap port is already in use by another program". Two things struck me when I looked at it. First, it is a manual reset monitor, so how could it be auto-closed? Second, it was about the management server itself and its inability to claim the SNMP trap receiver port (UDP 162), while I was sure I had disabled the SNMP services.

So what claimed the SNMP port? With 2 simple commands you can find out:

> netstat -ano -p UDP | find ":162"

This returned a PID. Now check that PID against tasklist /svc and we should have the process responsible for the claim.

> tasklist /svc /fi "pid eq <pidnumberfound>"

and it returned nothing.
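As an aside, this two-step lookup is easy to script. Below is a minimal Python sketch (my illustration, not from the original post; the sample netstat output is made up) that extracts the owning PID for a UDP port from captured `netstat -ano -p UDP` output:

```python
import re

def find_udp_port_owner(netstat_output: str, port: int) -> list[str]:
    """Extract the PIDs that own a given UDP port from `netstat -ano -p UDP` output."""
    pids = []
    for line in netstat_output.splitlines():
        # Windows netstat lines look like: "  UDP    0.0.0.0:162    *:*    2372"
        m = re.match(r"\s*UDP\s+\S+:(\d+)\s+\S+\s+(\d+)", line)
        if m and int(m.group(1)) == port:
            pids.append(m.group(2))
    return pids

# Illustrative sample output, not real data.
sample = """  UDP    0.0.0.0:161            *:*                    1040
  UDP    0.0.0.0:162            *:*                    2372"""
print(find_udp_port_owner(sample, 162))  # prints ['2372']
```

The returned PID can then be fed straight into the `tasklist /svc /fi "pid eq <pid>"` check.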

After quite some rechecking of my results in different ways, I came to the conclusion that the UDP port really had been claimed by a process that didn't exist anymore. I suspect it's a bug in Windows Server; it should always close the handles whenever a process dies. But let's be pragmatic about it: reboot the server. After the reboot I ran the commands above again and found that "monitoringhost.exe" had claimed the port. W00t, we solved the problem, SCOM can receive traps again.

As mentioned at the start of the post, the alerts were closing fast, and because of that, anyone who saw them wouldn't pay any attention to them. SCOM wasn't receiving any traps in the above condition, so why was the alert of a manual reset monitor being auto-closed so fast?

The explanation is quite simple: the recovery task for this monitor. The recovery runs, disables the Windows SNMP services and resets the monitor. The problem is that the recovery didn't actually do anything (the SNMP services were already disabled), but it did reset the monitor.

I think the proper solution would be a rewrite of the monitor, so that it checks whether SCOM has really claimed the SNMP port. I could do this, but for now I will leave it. It might have just been one of those exotic errors you'll see once in a lifetime…

Posted in Management Servers, troubleshooting | Leave a Comment »

SCOM 2012 design for a service provider

Posted by rob1974 on October 24, 2013

In the last year we've created a (high level) design to monitor an unlimited number of Windows servers (initial amount around 20,000). However, halfway through the year it became clear that this company had decided on a new architecture altogether. Not only would SCOM 2012 become obsolete in this architecture, offshoring monitoring would also make Marc and me obsolete (not to worry, we both have a new job). As I feel it's kind of a waste of our work, I'll blog the two things that I feel were quite unique in this design.

There's still a SCOM 2007 environment running, designed for around 10,000 servers. We've designed this as a single instance and it is monitoring 50+ customers. Those customers were separated by scoping on their computers (see our blog and/or look up the service provider chapter in Operations Manager 2012 Unleashed). However, we never reached 10,000 servers, due to a few factors. The first was the number of concurrent consoles in use. The number of admins needing access increases with the number of servers you monitor, but Microsoft's "official" support numbers decrease the number of concurrent consoles as the number of supported agents goes up (50 consoles with 6,000 servers, 25 with 10,000 servers). Now this isn't a really hard number, but adding ever more hardware just shifts the bottlenecks (from management servers to database, and from disk I/O to network I/O), and of course the price of hardware goes up exponentially.

The second issue was that we had to support a lot of management packs and versions. We supported all Microsoft applications from Windows 2000 and up. So we didn't only run Windows 2000, 2003 and 2008, but all versions of DHCP, DNS, SQL, Exchange, SharePoint, IIS, etc. Some of the older ones are still the dreaded converted mp's, in some of which we had to fix volatile discoveries ourselves. By the way, I've been quite negative about how Microsoft developed their mp's, and especially about the lack of support on them, but the most recent mp's are really done quite well. MS, keep that up!

In the new design we've used a different approach. We decided not to aim for the biggest possible instance of SCOM, but to go for several smaller ones, plotted initially over the different kinds of customers my former company has to support. Some customers only needed OS support, some OS + middleware, and others required monitoring of everything.

Based on our experience we decided to create only one technical design for all the management groups, and a support formula for the number of supported agents. Keep in mind this is not set in stone; it's just to give management an estimate of the costs and insight into the penalty for adding ever more functionality (mp's) to the environment. Each management group would still hold more than one customer, but one customer should only be in one management group, so that distributed applications can be created (automatic discoveries should be possible).

This is what we wrote down in the design:

The number of agents a management group can support depends on a lot of different factors. The performance of the management servers and the database will be the bottleneck of the management group as a whole. A number of factors influence performance, e.g. the number of concurrent consoles, the number of management packs, and the health state of the monitored environments.

When a management group reaches its thresholds, a few things can be done to improve performance:

1. Reduce the amount of data in the management group (solving issues and tuning, both leading to less alerts and performance data).

2. Reduce the console usage (reduces load on management servers and databases)

3. Reduce the monitored applications (less management packs).

4. Increase the hardware specs (more cpu’s, memory, faster disk access, more management servers).

5. Move agents to another management group (which reduces load in terms of data and console usage)

We'll try to have one design for all management groups, so the hardware is the same. However, the number of management packs differs per management group. To give an estimate of how this affects the number of supported agents per management group, we've created a support formula.

The formula is based on two factors: the number of concurrent consoles and the number of management packs[1]. The health of the monitored environments is taken out of the equation. The hardware component is taken out as well, but assumed to be well in line with the sizing helper of the SCOM design guides from Microsoft. With fewer than 30 consoles and <10 monitored applications/operating systems, the sizing is set to 6,000 agent managed systems (no network, *nix or dedicated URL monitoring).

The tables with the penalties for the number of consoles and management packs, and the resulting formula, are given below. Please note, the official support limit from Microsoft for the number of consoles is 50.

consoles  penalty (%)     management packs  penalty (%)
0         0               0                 0
5         0               5                 2
10        0               10                4
15        0               15                7
20        0               20                11
25        0               25                16
30        0               30                23
35        2.5             35                32
40        5               40                44
45        7.5             45                60
50        10              50                78
55        30              55                85
60        50              60                90
                          65                95
                          70                97
                          75                99

Number of supported agents = (1-mppenalty/100)*(1-consolepenalty/100)*6000
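The formula and the penalty tables are easy to put into code. Here is a quick Python sketch (my illustration, not part of the original design document) that reproduces the numbers in the table below:

```python
# Penalty tables from the design (percentages).
CONSOLE_PENALTY = {0: 0, 5: 0, 10: 0, 15: 0, 20: 0, 25: 0, 30: 0,
                   35: 2.5, 40: 5, 45: 7.5, 50: 10, 55: 30, 60: 50}
MP_PENALTY = {0: 0, 5: 2, 10: 4, 15: 7, 20: 11, 25: 16, 30: 23, 35: 32,
              40: 44, 45: 60, 50: 78, 55: 85, 60: 90, 65: 95, 70: 97, 75: 99}

def supported_agents(consoles: int, management_packs: int) -> int:
    """(1 - mp penalty/100) * (1 - console penalty/100) * 6000.
    Both inputs must be one of the 5-step values from the penalty tables."""
    cp = CONSOLE_PENALTY[consoles]
    mp = MP_PENALTY[management_packs]
    return round((1 - mp / 100) * (1 - cp / 100) * 6000)

print(supported_agents(50, 30))  # prints 4158
```

With 50 consoles (10% penalty) and 30 management packs (23% penalty) this gives 0.9 × 0.77 × 6000 = 4158 supported agents.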

This results in the following table, which shows the number of supported agents.

horizontal = concurrent consoles

vertical = number of management packs

mp's    <30    35    40    45    50    55    60
0      6000  5850  5700  5550  5400  4200  3000
5      5880  5733  5586  5439  5292  4116  2940
10     5760  5616  5472  5328  5184  4032  2880
15     5580  5441  5301  5162  5022  3906  2790
20     5340  5207  5073  4940  4806  3738  2670
25     5040  4914  4788  4662  4536  3528  2520
30     4620  4505  4389  4274  4158  3234  2310
35     4080  3978  3876  3774  3672  2856  2040
40     3360  3276  3192  3108  3024  2352  1680
45     2400  2340  2280  2220  2160  1680  1200
50     1320  1287  1254  1221  1188   924   660
55      900   878   855   833   810   630   450
60      600   585   570   555   540   420   300
65      300   293   285   278   270   210   150
70      180   176   171   167   162   126    90
75       60    59    57    56    54    42    30

Again, we never put this to the test, but those figures resulted in our goals for this design: one management group for OS monitoring holding 6,000 servers, one for middleware holding around 3,500, and one management group with full-blown monitoring for around 2,000 servers. Please note our first remark about tuning and solving alerts; those things are still very important for a healthy environment!

The next part that made our design stand out is the ability to move agents, or actually entire customers, from one management group to another. Now this is of course always possible, but we wanted to do this automated and keep all customizations where applicable. The reasons for such a move would be sizing/performance of the management group, mp support (e.g. the customer starts using a new OS which isn't supported in the management group he is in), or a contract change with the customer (e.g. from OS support to middleware support or the other way round).

In order to be able to keep all customizations we’ve created a mp structure/naming convention:

jamaOverride<mpname> = holds all generic overrides to <mpname>. This mp is sealed and should be kept in sync between all management groups holding <mpname>, as it contains the enterprise monitoring standards.

jama<customer>Override<mpname> = holds all customer specific overrides to <mpname> and/or changes to the generic sealed override mp. This one is of course unsealed, as it holds GUIDs of specific instances.

jama<customer><name> = holds all specific monitoring rules and DA's for a customer. This is not limited to just one mp; normal distribution rules should apply. In other words, don't create one big "default" mp, but keep things grouped based on the "targets" of the rules/monitors.

Now when we need to move a customer, we can just move an agent to a new management group and import all the jama<customer> mp's into that management group. Of course this is only applicable if the new management group supports the same mp's as the old one. jamaOverride<mpname> should already exist, as it just contains enterprise-wide overrides to classes. The jama<customer>Override<mpname> mp would contain specific instances. However, specific instances get a GUID on discovery; if you remove an agent and reinstall it, you get a new GUID, so the overrides wouldn't apply anymore. Fortunately SCOM supports multihoming, and one of its big advantages is cookdown. Cookdown only works when the same scripts use the same parameters, which means the agent's GUIDs must be the same when multihomed, as that GUID is a parameter for many data sources.

To make sure the GUID is kept, a move between management groups needs to use this multihoming principle, so a move becomes: add the agent to the new management group, make sure all discoveries have run (basically time, and maybe a few agent restarts) and finally remove the old management group from the agent. Because this is a relatively easy command (no actual install), the add and remove can be done via a SCOM task (add the "add new mg" task to the old management group and the "remove old mg" task to the new management group).

Again, we never built this, and we don't expect it to be as smooth as described here (there's always some unexpected reference in an override mp :)), but hopefully it is of some use to someone.

[1] 1 management pack equals 1 application or OS. E.g. Windows 2008 is counted as 1 management pack, even though it consists of 2 management packs in SCOM (discovery and monitoring).

Posted in general, management packs, Management Servers | Leave a Comment »

HP Proliant MP discovery is missing Networks, Storage and Management Processor

Posted by MarcKlaver on April 25, 2013


Recognize the above situation?

When you install the HP Proliant Management Pack and use the SNMP based agents, some discoveries do not seem to work. To solve this, you must also configure the SNMP settings on your server. In order for the SNMP based agents to read all required information, an SNMP community string must be created.

You should configure the following within the SNMP Service Properties:


  • Under "Service", allow all.
  • Under "Security", add a read-only community string and make sure that SNMP packets are accepted from the "localhost" host. Note that the "Send authentication trap" option is optional, not required.
  • Restart your SNMP Service.
  • Restart the SCOM service.

And after a few minutes the result should look like this:


When the HP agents start, they will read the SNMP settings and use them to access the SNMP based information from the agent :)

Posted in management packs | Leave a Comment »

Configure Custom Service Monitoring, an alternative method.

Posted by rob1974 on June 12, 2012

We are running SCOM in a "service provider" solution. When you do this, you want to have standardized environments and tune everything in a generic manner. However, our main problem is that every customer is different and has different requirements. Simple changes like disk thresholds or setting up service monitoring for specific services on specific computers can be quite time consuming. They can also have a big impact on the performance of the environment (a simple override to one Windows 2008 logical disk gets distributed to all Windows 2008 servers!).

Of course, we could have authors and advanced operators do this job, but there's no way of auditing changes or forcing advanced operators to use certain override mp's only. We're convinced this would lead to performance issues and dependency problems, because overrides would get saved everywhere. So we believe this can only be done by a small number of people who really understand what they are doing.

To counter this we use an alternative way of setting overrides for common configurations. We create a script that reads a registry key for a threshold value, or, as I will show in this blog, a configuration value. An example of a threshold value: we've created a disk free space script that uses a default value (both a % and an MB value), but when an override is present in the registry it will use that value instead. The registry override in turn can be set with a SCOM task with just normal operator rights (you can restrict task access for operators in their SCOM profile, so not every operator can do this). The change is instant and has no configuration impact on the SCOM servers.

Now to the service monitoring example of this principle. What we've done is create a class discovery that checks whether a certain registry key is present. If it is, one monitor and one rule are targeted at that class. The monitor runs a script every 5 minutes and checks the registry key for service names. Every service name in this registry key is checked for a running state. If one or more services aren't in a running state, the monitor becomes critical and an alert is generated. When all services are in a running state again, the monitor resets to healthy and closes the alert.
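The decision logic of that monitor is simple enough to sketch. The following Python fragment is my illustration only (the actual management pack uses a script on the agent; the service names are examples), and it mirrors what the script decides based on the registry values:

```python
def evaluate_services(configured: list[str], states: dict[str, str]) -> tuple[str, list[str]]:
    """configured: service names read from the override registry key.
    states: current state per service, as the script would query it on the agent.
    Returns the monitor state plus the list of services that are not running."""
    failed = [name for name in configured if states.get(name) != "Running"]
    return ("Critical" if failed else "Healthy", failed)

# Example: one of the two configured services has stopped.
state, failed = evaluate_services(
    ["Spooler", "W32Time"],
    {"Spooler": "Running", "W32Time": "Stopped"},
)
print(state, failed)  # prints: Critical ['W32Time']
```

Once every configured service is back in the "Running" state, the same evaluation returns "Healthy" with an empty list, which is what resets the monitor and closes the alert.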

By running a task from the windows computer view you can setup the monitoring for the first time:


Overriding the task with the service name (not the display name) will add the service to the registry.


When the discovery has run, the found instance will be visible in the jama Custom Service Monitoring view and more tasks will be available. When it really is the first service, it might take up to 24 hours before the instance is found, as we've set the discovery to a daily interval. But you can always restart the System Center Management service to speed up the discovery.

The new tasks are:

- Set monitored service. Basically the same task as the one available in the computer view, just with a different target. It can add additional services to monitor without any impact on the SCOM backend, and the service will be monitored instantly, as the data source checks the registry on each run.

- List monitored service. Reads the registry and lists all values.

- Remove monitored service. Removes a service from the registry key, and deletes the key if the service was the last value. When the key is deleted, the class discovery removes the instance on the next discovery run. Overriding the key with "removeall" will also delete the key.

- List all services on the computer. This task doesn't have real value for this management pack; it's just added for looking up a service name from the SCOM console.
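The add/remove tasks only manipulate values under the registry key. A rough Python model of that behaviour (my sketch with hypothetical helper names; the real tasks run a script on the agent) makes it easy to see when the key, and with it the discovered instance, disappears:

```python
from typing import Optional

def set_monitored_service(values: list[str], service: str) -> list[str]:
    """'Set monitored service': add a value under the key (idempotent)."""
    return values if service in values else values + [service]

def remove_monitored_service(values: list[str], service: str) -> Optional[list[str]]:
    """'Remove monitored service': drop one value. Returning None stands for
    deleting the whole key, which happens on the last value or on 'removeall'."""
    if service == "removeall":
        return None
    remaining = [v for v in values if v != service]
    return remaining if remaining else None

values = set_monitored_service([], "Spooler")
values = set_monitored_service(values, "W32Time")
print(remove_monitored_service(values, "Spooler"))    # prints ['W32Time']
print(remove_monitored_service(values, "removeall"))  # prints None
```

When the modelled result is None, the key is gone and the class discovery will undiscover the instance on its next run.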

See below for some screenshots of the tasks and health explorer.


Task output of “List monitored services”:


The health explorer has additional knowledge and shows which services have failed through the state change context:





So what’s the benefit of all this:

- The SCOM admins/authors don’t have to make a central change, except for importing the management pack.

- The support organization doesn't have to log a change and wait for someone to implement it; they can make the change themselves with just SCOM operator rights (or with rights to edit a registry key on the computer locally), and it works pretty much instantly.

- From a performance view the first service that is added will have some impact on the discovery (config churn), but additional services don’t have any impact.

However, there's still no auditing. We've allowed this task to what we call "key users", and we have 1 or 2 key users per customer. Setting object access monitoring on the registry key could give an idea of who changed what.

The performance benefit of this monitor is probably minimal. However, using this principle for disk thresholds gives a huge benefit, as that's a monitor that is always active on all systems, and overriding values through a registry key on the local system removes all override distribution (I might post that mp as well).

If you want to check out this mp, you can download the management pack jamaCustomService here. I've uploaded it unsealed, but I recommend sealing it if you really want to put it into production.

Posted in general, management packs | Tagged: , , | Leave a Comment »

Website monitoring, another gotcha!

Posted by rob1974 on February 13, 2012

Recently I had an issue with website monitoring in a SCOM demo environment. I had configured a website test (through the template) to run every 2 minutes. I had created a DNS zone hosting the FQDN for this website. Then I paused the DNS zone and waited for the HTTP test to fail. I expected the HTTP test to fail and to have an alert in SCOM within 2 minutes. However, after 10 minutes I had nothing… Not really what you want during a live demo.

So what was happening here? Some of you might have got it already: it's DNS caching by the client running the HTTP test. So how to stop this? Well, there are 3 things you can do.


1. By default the DNS Client service is started on a Windows machine. Simply stop the DNS Client service and the caching will stop (DNS queries will still resolve).

2. Increase the interval of the HTTP test. Anything more than 15 minutes will do…

3. Decrease the default cache time for the queries to something less than your test interval.


As I was giving a demo, option 2 was not an option for me. But I would seriously think about it when doing website tests in production. Obviously service level agreements should play a part in this, but a delay of at most 30 minutes on an SLA of 8 hours would definitely be acceptable to me.

Option 1 was no option either. Besides caching, the DNS Client service also registers domain joined hosts in DNS, so stopping it is not something I would recommend. Besides, caching helps performance-wise; I'm not sure I'd ever want to disable it.

So that left option 3, but how to do this? In HKLM\SYSTEM\CurrentControlSet\services\Dnscache\Parameters, create the DWORD MaxCacheTtl (and MaxNegativeCacheTtl, if "NegativeCacheTime" isn't already 0) and give it a value below the interval of the website test. For a 2 minute test I used a value of 90 (seconds).


Normally I think option 2 is the best way to go: no use running tests more often than you need to. However, if you have a dedicated host for running website tests and you run those tests more often than every 15 minutes, consider reducing the max cache time of the DNS client.

Posted in general, troubleshooting | Tagged: , , | Leave a Comment »

Stop storing data (partially or temporarily) into the data warehouse database

Posted by MarcKlaver on January 19, 2012

In order to facilitate the use of the data warehouse database, there are 3 default overrides in an environment that has its data warehouse enabled.


If you need to stop storage to the data warehouse (partially or temporarily), you can just override the default overrides (again) to set the Drop Items parameter to true. This will, after propagation to the management servers, cause the items to be dropped (and not stored in the data warehouse database).

Note that while this is possible, I assume it is an unsupported configuration :)

Posted in Management Servers | Leave a Comment »

Agent proxy

Posted by MarcKlaver on July 1, 2011

Until now we have set agent proxy for an agent only when required, and we used a script to do this; see this link for more information. But now Microsoft has come up with something new in the Exchange 2010 management pack: it will not discover anything until you have set agent proxy on for the Exchange servers (so we can't do this afterwards anymore). This meant we needed to make a choice:

  1. Manually enable the agent proxy setting for all Exchange 2010 servers (now and in the future), which means an Exchange 2010 server will not be discovered until we actually do so.
  2. Enable agent proxy for all agents.

At the moment, around 60 percent of the agents already have proxying enabled. So what's the advantage of not setting this by default for all agents? Looking at security, you already have to enable this setting for all security-important servers (AD, Exchange, ISA, Citrix, etc.). And since we have no knowledge of when an Exchange server is connected to our environment, we decided to enable it for all agents.

This is the script to do it:


# Add the Operations Manager snapin and connect to the root management server.
# $rootMS should contain the FQDN of your root management server.
add-pssnapin "Microsoft.EnterpriseManagement.OperationsManager.Client";
set-location "OperationsManagerMonitoring::";
new-managementGroupConnection -ConnectionString:$rootMS;

## set proxy enabled for all agents where it is disabled
$NoProxy = get-agent | where {$_.ProxyingEnabled -match "False"}
$NoProxy | foreach {$_.ProxyingEnabled = $true}
$NoProxy | foreach {$_.ApplyChanges()}

See also Kevin Holman’s blog

Posted in Agent Settings | 2 Comments »

State changes from disabled monitors

Posted by MarcKlaver on May 18, 2011

While trying to reduce the number of state changes, we stumbled upon a bug in the agent software. We were investigating our top 50 of state-changing monitors, using Kevin Holman's queries.

When we looked at the results, we saw some strange things. First, the number of state changes was equal for a lot of monitors.

An example of this:


Exactly the same number of state changes for two different monitors? When we looked into these monitors, they were disabled by default in the management pack where the monitor was defined. Digging further showed that there was no override present in the system that would enable the monitor.

We now had a problem, since 48 of our top 50 were monitors that were disabled by default, without any override to enable them.

This turns out to be a bug in the agent software. The moment the agent (re-)initializes, either by starting or by coming out of maintenance mode, it will detect the monitor and initialize it. When it realizes it should (by default) disable the monitor, it sends a state change for the disabled monitor.

This is also the case for monitors that are enabled by default, but disabled using a custom override. Unfortunately a fix for this issue is not as easy as it sounds (according to Microsoft support) and a fix will not be realized in the R2 version of SCOM.

So if you have tuned a lot (like us) and find these monitors, just skip them. You can’t fix this one :)

Below is a list of monitors we found that were default disabled during this investigation and that we now exclude from the query.



If you want to exclude any (default) disabled monitor that you found, exclude it in the query as shown below:

use OperationsManager

select distinct top 50 count(sce.StateId) as NumStateChanges, m.MonitorName, mt.typename AS TargetClass
from StateChangeEvent sce with (nolock)
join state s with (nolock) on sce.StateId = s.StateId
join monitor m with (nolock) on s.MonitorId = m.MonitorId
join managedtype mt with (nolock) on m.TargetManagedEntityType = mt.ManagedTypeId
where m.IsUnitMonitor = 1
and m.MonitorName not in (
    -- list the (default) disabled monitor names to exclude here, e.g.:
    'Your.Disabled.Monitor.Name.1',
    'Your.Disabled.Monitor.Name.2'
)
group by m.MonitorName,mt.typename
order by NumStateChanges desc

Posted in Agent | Leave a Comment »

What property is discovered?

Posted by MarcKlaver on May 17, 2011

I think we've all used these Kevin Holman queries to handle the config churn in our environment. But what if you still have config churn issues, and don't see anything with these queries?

We still had config churn, but running Kevin Holman's queries did not point us to the cause. However, we got some information with a support case.

First we can retrieve exactly what has changed, using this query (run against the data warehouse database):

use OperationsManagerDW

select * from dbo.ManagedEntityProperty
where DWCreatedDateTime > dateadd(hh,-24,getutcdate())
order by DWCreatedDateTime, ManagedEntityRowId

This will result in output similar to this:


It will give you all properties that were changed within the last 24 hours and what exactly was changed. Now when you "click" on a PropertyXml or DeltaXml value, a new window opens, showing you exactly which properties there are and which have changed.

Now we don't yet have any idea where to find this in a management pack, but we will get there. From the above output, take the ManagedEntityRowId and place it in the next query:

use OperationsManagerDW

select * from ManagedEntity
where ManagedEntityRowId = 121403

This will result in output similar to this:


The ManagedEntityGuid is what we need here. Place it in the next query (which will run against the operations database):

use OperationsManager

select * from BaseManagedEntity
where BaseManagedEntityId = '3B9F6E60-02B5-8369-859F-8047093CE33F'

The result is:


The next thing we need is the BaseManagedTypeId:

use OperationsManager
select * from ManagedType
where ManagedTypeId = '10C1C7F7-BA0F-5F9B-C74A-79A891170934'

Which results in:


Here you can find the TypeName that is discovered (Microsoft.SQLServer.Database in this case). Use the ManagementPackId to get the actual management pack:

use OperationsManager
select * from ManagementPack where ManagementPackId = 'BCD6DCCF-C46C-A1F5-3C8D-BB4E99E2A6A3'

And the final result will be:


So we now know that the property is of type “Microsoft.SQLServer.Database” and that it is discovered in the “Microsoft.SQLServer.Library” management pack (aka “Microsoft SQL Server Core Library”).

Note that if you are only interested in the actual management pack name, you can also use this query, which takes the ManagedEntityGuid from the first query and runs against the operations database:

use OperationsManager
select * from ManagementPack
where ManagementPackId = (
    select ManagementPackId from ManagedType
    where ManagedTypeId = (select BaseManagedTypeId from BaseManagedEntity
        where BaseManagedEntityId = '3B9F6E60-02B5-8369-859F-8047093CE33F'))
This will result in the same output as the last screenshot (but now you don't know the type of the data). When you have this information, you can look up the corresponding discoveries and fine-tune them if required.

Posted in config churn | Leave a Comment »

Updating manually installed agents from the console

Posted by MarcKlaver on May 9, 2011

We have created a management pack that will update manually installed agents from a task in the operations console. However, before you can use this management pack you must have implemented the JDF framework for file distribution. This management pack depends on that framework, so if you are not able to set up the framework, this management pack is useless to you.


Below is the final XML file for your management pack called jamaAgent.Update.xml:

<?xml version="1.0" encoding="utf-8"?><ManagementPack ContentReadable="true" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <Reference Alias="MSWL">
      <Reference Alias="MSSCL">
      <Task ID="jamaAgent.Update.ConsoleTask.AgentUpdate" Accessibility="Public" Enabled="true" Target="MSSCL!Microsoft.SystemCenter.HealthService" Timeout="300" Remotable="true">
        <WriteAction ID="PA" TypeID="MSWL!Microsoft.Windows.ScriptWriteAction">
          <Arguments />
' File   : jamaTaskUpdateAgent.vbs
' Use    : Script for the jama Agent Update task.
' SVN    : Revision: 136
'          Date: 2011-04-12 09:14:33 +0200 (Tue, 12 Apr 2011)
' Note(s): 1) ---
option explicit
on error goto 0

const INT_RETRIES           = 5
const INT_DEFAULT_WAIT_TIME = 5                 ' Assumed value; the original definition is missing from this listing.
const STR_DOWNLOAD_PATH     = "/files/opsmgr/updates/agent/"


' jamaMain
' Use    : Main entry for this script.
' Input  : ---
' Returns: ---
' Note(s): 1) ---
function jamaMain()
    dim jdf
    dim iMinutesToWait
    dim bForceUpdate

    if(not jdfLoadFramework(jdf, null, null, null, null, null)) then
        wscript.echo "The Jama Distribution Framework could not be loaded. No update is performed."
    else
        wscript.echo "Jama Distribution Framework loaded" & vbNewLine & vbNewLine & jdf.GetInfoString(wscript.fullname)
        if(jamaInitialize(iMinutesToWait, bForceUpdate)) then
            if((bForceUpdate = false) and jamaRestartRequired()) then
                wscript.echo "Update could not be scheduled due to dependent services." & vbNewLine & _
                             "Use the force option to override this behaviour and force the update to run."
            else
                if(jamaScheduleUpdateIn(jdf, iMinutesToWait) = true) then
                    wscript.echo "Update is scheduled to run in " & iMinutesToWait & " minutes."
                else
                    wscript.echo "Update could not be scheduled. No update will be performed!"
                end if
            end if
        else
            wscript.echo "Initialization failed. No update will be performed!"
        end if
    end if
end function

' jamaInitialize
' Use    : Initialize the script.
' Input  : iMinutesToWait - integer - Number of minutes to wait before
'                                     activating the schedule (output)
' Returns: Boolean - TRUE  - No errors detected.
'                    FALSE - An error was detected.
' Note(s): 1) ---
function jamaInitialize(byref iMinutesToWait, byref bForceUpdate)
    dim colNamedArguments
    dim bResult
    dim bFound

    bResult = false
    bFound  = false
    bForceUpdate = false
    set colNamedArguments = WScript.Arguments.Named

    if(colNamedArguments.Exists("minutes")) then
        iMinutesToWait = cint(lcase(colNamedArguments.Item("minutes")))
        bFound         = true
        bResult        = true
    end if

    if(colNamedArguments.Exists("force")) then
        if(lcase(colNamedArguments.Item("force")) = "true") then
            bForceUpdate = true
        end if
    end if

    if(not bFound) then
        iMinutesToWait = INT_DEFAULT_WAIT_TIME
        bResult        = true
    end if

    jamaInitialize = bResult
end function

' jamaRestartRequired
' Use    : Checks if an update of the agent will trigger a restart or reboot.
' Input  : ---
' Returns: Boolean - TRUE  - A restart is required.
'                    FALSE - A restart is not required.
' Note(s): 1) If the dependency could not be determined, this function will
'             return true.
'          2) For windows 2000 this function will always return true.
'          3) When true on a 2003 server, a reboot is required but not forced.
'          4) When true on 2008 or higher, the dependent services could be
'             restarted by the installer service.
function jamaRestartRequired()
    dim strCmd
    dim iResult
    dim bResult
    dim fso
    dim fh
    dim strTempFile
    dim strLine

    strTempFile = "jamaTaskUpdateAgent.$$$"
    bResult = true
    strCmd  = "tasklist /fo csv /m EventCommon.dll /FI ""imagename ne HealthService.exe"" /FI ""imagename ne MonitoringHost.exe"" > " & strTempFile

    iResult = jamaRun(strCmd)                                               ' Run tasklist and redirect its output to the temporary file.
    set fso = CreateObject("scripting.filesystemobject")
    if(fso.FileExists(strTempFile)) then
        on error resume next
            set fh = fso.OpenTextFile(strTempFile, 1)                       ' Open for reading.
            if(err.number = 0) then
                do while(not fh.AtEndOfStream)
                    strLine = lcase(fh.ReadLine)
                    if(instr(strLine, "info: no tasks") > 0) then           ' No dependencies found.
                        bResult = false
                    end if
                loop
                fh.Close
                set fh = fso.GetFile(strTempFile)
                fh.delete(true)                                             ' Delete the temporary file.
            end if
        on error goto 0
    end if

    jamaRestartRequired = bResult
end function

function jamaRun(byval strCmd)
    dim objShell
    dim iResult

    iResult      = 0
    set objShell = wscript.createObject("wscript.shell")
    iResult      = objShell.run("cmd /c " & strCmd, 0, true)    ' Run hidden and wait for the exit code.
    set objShell = nothing

    jamaRun = iResult
end function

' jamaScheduleUpdateIn
' Use    : Retrieve and schedule the required update.
' Input  : jdf            - object  - JDF object
'          iMinutesToWait - integer - Minutes to wait for the schedule.
' Returns: Boolean - TRUE  - No errors.
'                    FALSE - An error was detected.
' Note(s): 1) ---
function jamaScheduleUpdateIn(byref jdf, iMinutesToWait)
    dim strSourceFile
    dim strTargetFile
    dim strTargetDir
    dim iCount
    dim iResult
    dim bResult
    dim strCmd
    dim fso

    iCount  = 0
    iResult = 0
    bResult = false
    set fso = CreateObject("scripting.filesystemobject")
    strSourceFile = "jamaAgentUpdate" & jdf.Platform & ".exe"
    strTargetDir  = jdf.ExpandString("$JDF_IN_DIR$" & "jamaAgentUpdate\")
    if(jdf.CreateDirectory(strTargetDir)) then
        strTargetFile = strTargetDir & strSourceFile
        bResult = fso.FileExists(strTargetFile)
        if(not bResult) then
            do
                iResult = jdf.GetFile("jdfBaseDistribution", STR_DOWNLOAD_PATH & strSourceFile, strTargetFile)
                iCount  = iCount + 1
            loop while((iCount < INT_RETRIES) and (iResult <> 0))
        end if
        if(iResult = 0) then
            bResult = fso.FileExists(strTargetFile)
            if(bResult) then
                strCmd  = strTargetFile & " /verysilent"
                bResult = jdf.ScheduleTaskIn(strCmd, iMinutesToWait)
            end if
        end if
    end if
    jamaScheduleUpdateIn = bResult
end function

' From template.vbs
' jdfLoadFramework
' Use    : Load and initialize the JDF framework.
' Input  : objJDF               - object - Object passed back with framework.
'          bInitialize          - bool   - true or null for initializing.
'          strVersion           - string - Minimum framework version required.
'          bForceVersion        - bool   - true, false or null.
'          strCustomerId        - string - Customer id or null.
'          strDefaultUploadPath - string - Default upload path.
' Returns: bool - TRUE  - Initialization of the framework succeeded.
'                 FALSE - Initialization of the framework failed.
' Note(s): 1) strVersion = null
'                 Any version of the framework will be accepted. The value of
'                 strForceVersion will be ignored.
'          2) strVersion = "x.x.x"
'                 Integer values, separated by dots, e.g. : "3.5.11"
'          3) bForceVersion = null/false
'                 The given version is a minimum version required for the
'                 framework.
'          4) bForceVersion = true
'                 The framework version must exactly match with the given
'                 version number.
'          5) strCustomerId = null
'                 The customer id will be retrieved from the registry. See
'                 the documentation for more information.
'          6) strDefaultPath = null
'                 The default upload path will be set to the root: "/". This
'                 argument can hold JDF variables (both default and custom
'                 provided in the jdf.jdp file). See the documentation for more
'                 information.
'          7) Normal use is:
'                 jdfLoadFramework(jdf, null, null, null, null, null)
'                 where jdf is the object you pass to the function.
function jdfLoadFramework(byref objJDF, byval bInitialize, byval strVersion, byval bForceVersion, byval strCustomerId, byval strDefaultUploadPath)
    dim bResult, fso, strFrameworkFile, objReg, strResult

    bResult   = false
    strResult = ""
    on error resume next
        set objReg=GetObject("winmgmts:{impersonationLevel=impersonate}!\\.\root\default:StdRegProv")
        if(err.number = 0) then
            objReg.GetStringValue  &H80000002, "SOFTWARE\Company\jama\jdf", "path", strResult     ' If you use another registry location, change this value!
            if((err.number = 0) and (strResult <> "") and (not isnull(strResult))) then           ' We got something, so try it.
                strFrameworkFile = strResult & "jdf.wsc"                                          ' This now should be the full path to the framework file.
                set fso = CreateObject("Scripting.FileSystemObject")
                if(err.number = 0) then
                    if(fso.FileExists(strFrameworkFile)) then
                        set objJDF = GetObject("script:" & strFrameworkFile)                      ' File exist, try to load it.
                        if(err.number = 0) then                                                   ' Framework found and object loaded.
                            if(bInitialize = false) then                                          ' No initialization.
                                bResult = false
                            else                                                                  ' Initialize the framework for use.
                                bResult = objJDF.InitializeFramework(strVersion, bForceVersion, wscript.scriptname, strCustomerId, strDefaultUploadPath)
                            end if
                        end if
                    end if
                end if
            end if
        end if
    on error goto 0
    jdfLoadFramework = bResult
end function
    <LanguagePack ID="ENU" IsDefault="false">
      <DisplayStrings>
        <DisplayString ElementID="jamaAgent.Update">
          <Name>jama Agent Update</Name>
        </DisplayString>
        <DisplayString ElementID="jamaAgent.Update.ConsoleTask.AgentUpdate">
          <Name>jama Agent Update</Name>
        </DisplayString>
      </DisplayStrings>
    </LanguagePack>

Now the script in this management pack expects a few things.


const STR_DOWNLOAD_PATH = "/files/opsmgr/updates/agent/"

This constant is used to specify the path on the remote SCP server where the update files can be retrieved. If you use another location, change the value of this constant.


strSourceFile = "jamaAgentUpdate" & jdf.Platform & ".exe"

This value is used to generate the name of the update package to retrieve. We have re-packaged the two update files required for the agent update into a single executable (see also this article from Kevin Holman for more information). You can download an archive with three packages (for all supported platforms) on this location. If you create your own, just make sure the final name of the update package corresponds with the final strSourceFile variable value:

jamaAgentUpdateIA64.exe –> For the Itanium platform
jamaAgentUpdateX64.exe –> For the x64 platform
jamaAgentUpdateX86.exe –> For the x86 platform
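
The name construction the script performs (strSourceFile = "jamaAgentUpdate" & jdf.Platform & ".exe") can be sketched in Python; this is only an illustration, and the platform strings are assumptions inferred from the package names above:

```python
# Illustrative sketch of the file-name construction done in the VBScript;
# the platform identifiers are assumed from the package names listed above.
def update_package_name(platform):
    """Build the update package name for a given agent platform."""
    supported = {"IA64", "X64", "X86"}
    if platform not in supported:
        raise ValueError("unsupported platform: %s" % platform)
    return "jamaAgentUpdate" + platform + ".exe"
```

For example, `update_package_name("X64")` yields `jamaAgentUpdateX64.exe`.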


What we actually do is very simple.

  • We detect the platform we are running on and construct the correct file name of the file to retrieve (this update package contains the required .msp files for the agent).
  • We retrieve the update package.
  • We schedule the update package to run in ## minutes and to run unattended.
  • We hope for the best :)
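
The retrieval step above retries the download a few times, as jamaScheduleUpdateIn does. A minimal Python sketch of that retry loop, where `get_file` is a hypothetical stand-in for the framework's `jdf.GetFile` call and the retry count of 3 is an assumption (the real value comes from the INT_RETRIES constant elsewhere in the script):

```python
# Illustrative sketch of the retry loop in jamaScheduleUpdateIn;
# get_file stands in for jdf.GetFile, which returns 0 on success.
INT_RETRIES = 3  # assumed value; defined elsewhere in the real script

def retrieve_with_retries(get_file, source, target, retries=INT_RETRIES):
    """Call get_file up to `retries` times until it reports success (0)."""
    result = -1
    count = 0
    while count < retries and result != 0:
        result = get_file(source, target)
        count += 1
    return result == 0
```

Only when the retrieval reports success does the script go on to schedule the unattended installation.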

If something goes wrong with the installation, the failure can be found in the log files created by the update. There are two log files:


Also, the package itself will again detect whether it is running on the correct platform, and will only start the update if the package matches that platform.


As you can see in the .XML file, we use the jdfBaseDistribution account for retrieving the file, so you don’t need to configure the SCP server for every customer you have. Just make the file available to the jdfBaseDistribution account. The task itself is targeted at the health service. Select the “Agent By Version” leaf in the console to get an overview of all your agents and their versions:


If you select a health service, the task pane will show you the new update task:


Note that the task is not version aware, so it will always be available and will always run. This means you can run the update over a currently installed agent (which works without issues). After selecting the task, the task pane is shown and you can change the arguments for the script.


If you don’t change anything, the task will try to schedule the update to run in 120 minutes. If you require another interval, use the /minutes:## argument; the task will then be scheduled ## minutes in the future. So if I want the update to run within 5 minutes:




The first thing the task does is check for dependencies (for more information about the dependencies, see this link). If no dependencies are found, the job is scheduled ## minutes in the future (or 120 minutes if no override is given):
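
The dependency check boils down to parsing tasklist output: jamaRestartRequired runs tasklist filtered on modules that loaded EventCommon.dll, and treats the line "INFO: No tasks are running which match the specified criteria." as "no dependencies found". A hedged Python rendering of that decision:

```python
# Illustrative sketch of the dependency test in jamaRestartRequired:
# any tasklist output other than "INFO: No tasks ..." means some other
# process has EventCommon.dll loaded, so a restart would be required.
def restart_required(tasklist_output):
    """Return True unless tasklist reported that no matching tasks exist."""
    for line in tasklist_output.splitlines():
        if "info: no tasks" in line.lower():
            return False
    return True
```

Note that, like the VBScript, this errs on the safe side: if the output cannot be interpreted, a restart is assumed to be required.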




If a dependency is found, the task will not schedule the update:


But as the output shows, you can override the dependency detection by adding the /force:true option to the argument list. When set, the update will always run, even if dependencies are detected.
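
The two task arguments are handled the way jamaInitialize does it; here is an illustrative Python sketch (the /minutes and /force names come from the script, and the default of 120 mirrors the INT_DEFAULT_WAIT_TIME constant):

```python
# Illustrative sketch of the /minutes:## and /force:true argument
# handling performed by jamaInitialize in the VBScript.
def parse_task_arguments(named):
    """Return (minutes_to_wait, force_update) from a dict of named arguments."""
    minutes = int(named["minutes"]) if "minutes" in named else 120  # 120 mirrors INT_DEFAULT_WAIT_TIME
    force = named.get("force", "").lower() == "true"
    return minutes, force
```

So `/minutes:5 /force:true` results in a forced update scheduled 5 minutes in the future.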


After the update is finished, the agent should reflect the new version information:


You can now update your manually installed agents using the operations console :)

Posted in Agent

