Plugin versions on agent does not support target type rac_database



I wasn’t able to promote database targets after I applied the July bundle patch.  I quickly opened an SR and received the following excellent, if scary, advice.   The solution, as you’ll see, involves directly manipulating data in the SYSMAN schema.

The Error Message

“Plugin versions on agent does not support target type rac_database”.

In this case the agents appeared to have a later release of the database plugin than the management server.   A quick check of the OMS and agents through the EM console showed that I was fully deployed with 12.0.8.  Everything’s fine, it just doesn’t work.

Evaluating the Problem

It seems my production OEM environment had entered some sort of quantum state, but I’m confident quantum computing wasn’t in this bundle patch.  Something must be seriously wrong.  That’s where Oracle Support came to the rescue.

The SYSMAN schema contains all sorts of data and database objects behind the OEM application, including a detailed listing of acceptable plugin ranges – the min and max versions your system will allow you to manage.

select * from em_target_types where type_resource_bundle like 'oracle.sysman.db.rsc';


There’s my problem: the max plugin version for rac_database was set too low.

Action Plan (Thanks to Oracle Support)

0. Take a complete backup of the repository DB  [ I chose to create a guaranteed restore point ].  The fix involves recompiling a hand-edited version of three packaged procedures, their dependent SQL, and several other objects.  Back up your OMR.

1. Navigate to the <MW_HOME>/plugins/oracle.sysman.db.oms.plugin_12. directory on the OMS server.

e.g. ls -lrt has_data_upgrade_*

ls -lrt cluster_data_upgrade_*

ls -lrt rac_database_data_upgrade_*

2. Find the last data upgrade sql file for has, cluster and rac_database.  In this case there were six SQL files for each plugin, so I edited has_data_upgrade_6.sql, etc.


3. Make a backup of each of those files.  We’ll edit and run these copies.

e.g. cp cluster_data_upgrade_6.sql cluster_data_upgrade_6_bkp.sql

4. Edit each backup file from step 3.  There are two edits per file:  correct the plugin version ID and add a commit statement at the bottom of the file.

 l_type_property_list SMP_EMD_NVPAIR_ARRAY;
 l_type_property SMP_EMD_NVPAIR;

l_type_property_list := SMP_EMD_NVPAIR_ARRAY();
 l_type_property := SMP_EMD_NVPAIR( 'inherit_cred_metadata',
 l_type_property_list(l_type_property_list.count) :=
 l_type_property := SMP_EMD_NVPAIR(
 l_type_property_list(l_type_property_list.count) :=
 l_type_property := SMP_EMD_NVPAIR( 'use_parent_creds',
 l_type_property_list(l_type_property_list.count) :=
 l_type_property := SMP_EMD_NVPAIR( 'TargetVersion',
 l_type_property_list(l_type_property_list.count) :=

 p_target_type_in => 'oracle_pdb',
 p_type_meta_ver_in => '1.4',
 p_parent_target_type_in => NULL,
 p_parent_type_meta_ver_in => NULL,
 p_type_property_list_in => l_type_property_list,
 p_type_display_name_in => 'Pluggable Database',
 p_type_display_nlsid_in => 'oracle_pdb_title',
 p_type_resource_bundle_in => 'oracle.sysman.db.rsc',
 p_monitored_by => '2',
 p_plugin_id => 'oracle.sysman.db',
 p_plugin_version_id => '', 
 p_category_prop_defs => NULL,
 p_ocm_gc_merged => '1',
 p_type_relationship => 1,
 p_client => 'OTHERS',
 p_plugin_min_version_id => '',
 p_product_id => NULL);
.. lots of other stuff

5. Run the backup scripts from step 4 on the repository database as the SYSMAN user:
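For example (a minimal sketch; the connect string is a placeholder, and the file names follow the pattern from steps 2 and 3):

cd <MW_HOME>/plugins/oracle.sysman.db.oms.plugin_12.*        # the plugin home from step 1
sqlplus sysman@OMR_DB @has_data_upgrade_6_bkp.sql
sqlplus sysman@OMR_DB @cluster_data_upgrade_6_bkp.sql
sqlplus sysman@OMR_DB @rac_database_data_upgrade_6_bkp.sql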


6. Run the following query on the repository database as the SYSMAN user and verify that MAX_PLUGIN_VERSION for the Cluster, Cluster Database and Oracle High Availability Service target types now shows the corrected value:

 select * from em_target_types where type_resource_bundle like 'oracle.sysman.db.rsc';

Turn OEM Job Output into Lists




We have an OEM job that we run after each system-wide Unix password change to verify success on all hosts.

The OEM Job is run against Dynamic Groups of hosts.  In this example I’ve limited it to the group DBHosts.


The job consists of a very simple call to id.
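The OS command block for the job is essentially just this (a sketch; it runs under the job’s named credential on every host in the group):

id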


The Credentials for the job contain the new password, of course.

The job quickly tries to connect to each host with that named credential and either succeeds or throws an error.  The Job’s screen output provides immediate feedback to the operator, but I need to share the list of exceptions with Operations so they can correct them.

Gathering Data

Job results are stored in sysman.mgmt$job_step_history. We can run a simple query to find our exceptions.  The list I sent to Operations excluded the end_time and status.

SELECT   target_name,
         status,
         end_time
FROM     sysman.mgmt$job_step_history
WHERE    status NOT IN ( 'Succeeded' )
         -- add a predicate on the job name here to limit the rows to your password-check job
         AND end_time > SYSDATE - 7
ORDER BY target_name;


SELinux blocked my .Xauthority



I was attempting to install an OEM management server on a new host in the lab using runInstaller.  Of course the installer is an X Window application, so I needed to configure port forwarding to get the display back to my MacBook.

I added the new host and its bastion to my ~/.ssh/config file to set up port forwarding.  The bastion’s Host name below is a placeholder; newlaboms is the new OMS host:

Host bastion
ConnectTimeout 60
StrictHostKeyChecking ask
ProxyCommand none
UserKnownHostsFile ~/.ssh/known_hosts
User oracle

Host newlaboms
ProxyCommand ssh -W %h:%p bastion
StrictHostKeyChecking no
UserKnownHostsFile ~/.ssh/known_hosts
VisualHostKey no
ForwardX11 yes
ForwardAgent yes
User oracle

Pretty straight-forward and it’s worked plenty of times before, so I expected no problems.


When I ssh’d to newlaboms I was hit with an xauth error:

xauth:  timeout in locking authority file /home/oracle/.Xauthority

Quick checks included ensuring proper ownership of my home directory (no problem), confirming that I could write a new file there (touch temp.file — ok), and verifying adequate space on the home file system (no problem).

My ‘id’ line looked wrong:

uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

All that “context=” stuff was added by SELinux.  This is a new host; SELinux is enabled by default and is normally disabled by my sysadmins as part of the build process.  They’d missed it this time – and this is how you check:

> /usr/sbin/getenforce
Enforcing

Yep – enhanced security is being enforced by SELinux!

Temporary Fix

So try this:

> sudo /usr/sbin/setenforce 0
> /usr/sbin/getenforce
Permissive

Now log out and back in; your .Xauthority file will be created and port forwarding will work!

Run ‘id’ and you’ll see the simple results you expect.
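A quick sanity check after logging back in (a sketch; your display number may differ):

ls -l ~/.Xauthority      # the file should now exist
echo $DISPLAY            # should show the forwarded display, e.g. localhost:10.0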

Permanent Change

The setenforce command does not require a server reboot but it’s also not going to survive a reboot.  To make the change permanent, ask your system admin to edit /etc/selinux/config to set “SELINUX=permissive”

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#       enforcing - SELinux security policy is enforced.
#       permissive - SELinux prints warnings instead of enforcing.
#       disabled - No SELinux policy is loaded.
# SELINUXTYPE= can take one of these two values:
#       targeted - Targeted processes are protected,
#       mls - Multi Level Security protection.
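If your sysadmin is willing, the edit itself is a one-liner (a sketch that assumes the file currently reads SELINUX=enforcing):

sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config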






User Defined Target Properties



We can create our own target property classifications using EM CLI.  In this example we’ll create a new property named “Product Type”.  In my shop we’ll use that property to identify Oracle RAC and single instance databases, but also noSQL targets that we’re adding to our EM environment.

Here’s the syntax for creating the new target property:

emcli add_target_property -target_type="*" -property="Product Type"
Property "Product Type" added successfully

The new target property can now be applied with EM CLI using the syntax:

emcli set_target_property_value -property_records="target_name:target_type:property_name:property_value"

emcli set_target_property_value -property_records="SPECIAL_DEMO_2:rac_database:Product Type:OracleRAC";
Properties updated successfully

That’s cool, but now I need to apply it to >500 targets, right?  Let’s build the CLI statement from a SQL query against the SYSMAN schema:

SELECT DISTINCT 'emcli set_target_property_value -property_records="'
       || target_name
       || ':'
       || target_type
       || ':Product Type:OracleRAC";'
FROM   sysman.mgmt$target_properties
WHERE  target_type LIKE 'rac_database'
       AND target_name NOT IN ( SELECT target_name
                                FROM   sysman.mgmt$target_properties
                                WHERE  property_name = 'udtp_1'
                                       AND property_value = 'OracleRAC' );

Notice that the query is filtered by the property_value (OracleRAC) and a property_name of udtp_1!   Oracle has assigned a generic-looking name to our custom user defined target property name, just as they use orcl_gtp* for OEM-defined properties.  These abstractions are only used in the repository and within the OEM application itself.  The user-readable property name appears in the console.

Test the new property by running the CLI command for a single target and then execute the subquery  (containing the udtp_1 property_name above) against the OMR.   Confirm that it’s performing as expected by looking up the target’s properties in the console.

Hang onto that SQL query after your deployment to periodically label new targets as they’re added to your environment.  I have several SQL queries like this posted to a single Information Publisher report (surprise!) that I mail to myself nightly.  You can also set up an OEM Job to execute the CLI commands for you.
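In case it helps, here’s a minimal sketch of how that can be run end to end (the connect string, file names, and paths are placeholders; handle the SYSMAN password per your own standards):

#!/bin/bash
# spool the generated emcli commands into a script, then run them
# SYSMAN_PWD, OMR_DB, OMS_HOME and the .sql file are placeholders for your environment
sqlplus -s sysman/"$SYSMAN_PWD"@OMR_DB > /tmp/set_product_type.sh <<'EOF'
set heading off feedback off pagesize 0 linesize 400 trimspool on
@/home/oracle/scripts/build_product_type_cli.sql
EOF

$OMS_HOME/bin/emcli login -username=sysman -password="$SYSMAN_PWD"
bash /tmp/set_product_type.sh
$OMS_HOME/bin/emcli logout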


OEM Preferred Connect Strings



You can specify a connection string for database targets in two ways:  fill in the blanks and let OEM create the connection string (my preference) or paste a Preferred Connect String on the bottom of the page.


Our default setting in our shop is to use multi-threaded servers. On the rare occasion when we need to change it, we paste the connect string from tnsnames.ora in place and add the SERVER=DEDICATED parameter.

Some things to keep in mind:

  • The Preferred Connect String takes precedence over strings OEM might generate from the individual entries
  • The string should start with “DESCRIPTION=” (see the example after this list)
  • Keep those darn () straight.  The easiest way to do that is to paste the string into an editor that supports syntax highlighting
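For illustration, a dedicated-server string might look like this (host, port, and service name are placeholders):

DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dbhost01.example.com)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=orclprd)(SERVER=DEDICATED))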


How to Share OEM Named Credentials



Named Credentials

Named credentials (NC) are an excellent way to support the work performed by OEM users/administrators without requiring them to actually know key passwords. Named credentials also provide a single location to update when the passwords change.

The Problem

Named credentials can only be shared with individual users/administrators inside OEM. You can’t grant the privilege to an EM role and expect the privilege to propagate to the role members like you can with a similar database role.  NC’s are an internal OEM construct, not database objects.

Our need for an automated solution to manage our named credentials arose as we started using OEM for provisioning and patching. We have a group of named credentials owned by our SYSMAN account – some to share and many to use internally. At first I made the grants to individuals as needed through the console, but as the number of users increased (yay) so did the overhead associated with the named credentials (boo).  More importantly, we also needed a quick way to share new named credentials with several EM administrators and revoke access from others.


This may appear to be a complex solution to this problem, but it consists of five simple parts:

  1. Build a control table in the OMR containing a complete list of SYSMAN-owned named credentials with a flag indicating whether each NC should be shared.
  2. Populate that table from user and NC data stored elsewhere in the SYSMAN schema.
  3. Use SQL to build EM CLI argfiles to grant and revoke access.
  4. Wrap those SQL and CLI commands into a shell script.
  5. Build an OEM Job to perform those updates on a regular basis

Step 1 – Build the Control Table

The first step in managing shared Named Credentials is to catalog them and define which will be shared through this process. I created a table named shared_named_credentials in my work-horse SCOOP schema with three columns:

  • CRED_NAME is the name of the credential stored in the OMR
  • GUID is the global unique identifier for the NC
  • SHARE_YN is the flag you set to either share or hide it. As you can see from the DDL, the default value is ‘Y’ – all newly created NC’s will be shared unless you change their flag manually.

   CREATE TABLE scoop.shared_named_credentials (
      CRED_NAME  VARCHAR2(128)              NOT NULL,   -- length is a guess; size it to match sysman.em_nc_creds.cred_name
      GUID       RAW(16)                    NOT NULL,
      SHARE_YN   CHAR(1 BYTE)               DEFAULT 'Y' );



Step 2 – Populate the Control Table

Information about Named Credentials is stored in OMR table sysman.em_nc_creds. I try to use views whenever possible but this time we can’t avoid accessing the table directly.

As you can see, it’s a very simple query to gather data for an INSERT statement.

Note:  I’ve used the cred_guid in this model to avoid problems with renamed NC’s.

INSERT INTO scoop.shared_named_credentials ( cred_name, guid )
   SELECT cred_name,
          cred_guid
   FROM   sysman.em_nc_creds
   WHERE  cred_owner = 'SYSMAN'
          AND cred_guid NOT IN ( SELECT guid
                                 FROM   scoop.shared_named_credentials );

Step 3 – Generate EM CLI statements

The SQL statements below will be spooled into EM CLI argfiles at run-time. The first statement will generate the GRANT privileges for EM administrators with a specific role (LOCAL_DBA_ROLE in these examples).

SELECT DISTINCT
         'grant_privs -name="'
         || u.user_name
         || '" -privilege="GET_CREDENTIAL;CRED_NAME='
         || c.cred_name
         || ':CRED_OWNER=SYSMAN";'
         as cli_command
FROM     sysman.gc_user_roles u,
         scoop.shared_named_credentials c,
         sysman.mgmt_priv_grants a   -- join a here if you want to skip grants that are already in place
WHERE    u.role_name IN ( 'LOCAL_DBA_ROLE' )
         AND UPPER( c.share_yn ) = 'Y'
ORDER BY cli_command;

The second statement revokes existing NC privileges your users may have received based on the share_yn flag in the control table.

SELECT 'revoke_privs -name="'
       || a.grantee
       || '" -privilege="GET_CREDENTIAL;CRED_NAME='
       || b.cred_name
       || ':CRED_OWNER=SYSMAN";'
       as cli_command
FROM   sysman.mgmt_priv_grants a,
       scoop.shared_named_credentials b
WHERE  a.priv_name = 'GET_CREDENTIAL'
       AND a.guid = b.guid
       AND a.grantee IN ( SELECT user_name
                          FROM   sysman.gc_user_roles
                          WHERE  role_name IN ( 'LOCAL_DBA_ROLE' ) )
       AND b.share_yn <> 'Y';

Step 4 – Wrap all of it into a shell script

I’m a firm believer in encapsulating as much as possible into shell functions. It makes your run-time procedure easier to read and it segregates specific tasks for analysis and debugging.
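Here’s a stripped-down sketch of that wrapper (paths, the repository connect string, and the .sql file names are placeholders; the functions mirror the output log below):

#!/bin/bash
# refresh_shared_ncs.sh - refresh shared Named Credential grants (sketch only)

set_env () {
  echo "Setting the environment ..."
  export OMS_HOME=/u01/app/oracle/mw          # placeholder
  export WORK_DIR=/home/oracle/scoop          # placeholder
  export CONN=sysman/"$SYSMAN_PWD"@OMR_DB     # placeholder; handle the password per your standards
}

update_nc_list () {
  echo "Updating the list of named credentials owned by SYSMAN"
  sqlplus -s "$CONN" @"$WORK_DIR"/insert_new_ncs.sql          # the Step 2 INSERT plus a COMMIT
}

build_argfiles () {
  echo "Creating the argfile to revoke permissions"
  sqlplus -s "$CONN" @"$WORK_DIR"/gen_revokes.sql > "$WORK_DIR"/revoke.argfile   # Step 3 revoke query, spooled
  echo "Creating the argfile to grant permissions"
  sqlplus -s "$CONN" @"$WORK_DIR"/gen_grants.sql  > "$WORK_DIR"/grant.argfile    # Step 3 grant query, spooled
}

apply_changes () {
  "$OMS_HOME"/bin/emcli login -username=sysman -password="$SYSMAN_PWD"
  "$OMS_HOME"/bin/emcli sync
  echo "Applying grants"
  "$OMS_HOME"/bin/emcli argfile "$WORK_DIR"/grant.argfile
  echo "Revoking older grants"
  "$OMS_HOME"/bin/emcli argfile "$WORK_DIR"/revoke.argfile
  "$OMS_HOME"/bin/emcli logout
}

set_env
update_nc_list
build_argfiles
apply_changes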




Output Log

Setting the environment ...

Updating the list of named credentials owned by SYSMAN

0 rows created.

Commit complete.

Creating the argfile to revoke permissions


Creating the argfile to grant permissions


Login successful
Synchronized successfully

Applying grants

Privileges granted to user/role "JERRY" successfully
Privileges granted to user/role "JERRY" successfully
Privileges granted to user/role "PHIL" successfully
Privileges granted to user/role "PHIL" successfully
Privileges granted to user/role "BRENT" successfully

Revoking older grants

Privileges revoked from user/role "BOBBY" successfully
Privileges revoked from user/role "PHIL" successfully
Privileges revoked from user/role "MICKEY" successfully

Logout successful

Current grants

-------------------------------- --------------------------------

Step 5 – Deploy Using OEM Jobs

The Job development and deployment process is straight-forward.  The heavy lifting was already done with the shell script.

General Tab

There are four essential elements to complete on this screen

  1. Give the Job a name.
  2. Add a useful description.
  3. Select Target Type of Host
  4. Pick the host where the job will be executed.  The shell script requires SQL*Plus and EM CLI, both of which are installed on your management servers, so it makes sense to run the job on an OMS.


Parameters Tab

  1. Select Script as the Command Type.
  2. Paste the script into the OS Script box.  You can edit this script after the job has been saved when required (unlike Information Publisher reports).
  3. Name the Interpreter to be used


Credentials Tab

It may seem incestuous to use Named Credentials to manage Named Credentials.  If you feel that way, you need to get a hobby.

Select the NC for the operating system account that owns your MW home.


Schedule Tab

I have two versions of this Job.  One is named simply UPDATE_NAMED_CREDENTIALS and the other has _IMMEDIATE tacked on to its name.  The only difference between the two is their schedule.  I kick off the IMMEDIATE job right after I make any change to SYSMAN’s NC’s.  The other one runs early every morning, local time.


Access Tab

You can grant permissions on this job to other OEM administrators.  I may do that someday but not right now.



There is some set-up and analysis required before this will work.  You need to build and populate the control table.  Before your first deployment job runs you’ll probably want to set the SHARE_YN flags.  Since the script handles both grants and revokes there is no harm in running the job a couple of times to get things set the way you want.

In this post I avoided specifics on two important elements:

  • You’ll need to decide how to manage the connect strings within the shell script.  I showed the simplest way in my illustrations.  The SYSMAN password is stored inside the job definition which is probably stored as a CLOB in the OMR so the risk is contained.  You might explore alternatives like passing variables into the script via the Job control system.
  • We grant a specific home-built OEM role for the administrators that will be using OEM for provisioning and patching.  You may choose to simply share these NC’s with all of your EM Administrators/users or base grants on existing OEM roles.  It’s your choice.


If/when we decide to expand this functionality to another set of EM Admins (if we choose to maintain another set of NC’s to share with DBAs from the Command center, for instance) another Y/N column could be added to the control table  and the SQL scripts updated to match.

Force OEM to Discover HA Systems



You know there’s a cluster and that means that an HA system exists, but OEM won’t discover it.  Of course that means that your RAC databases can’t be discovered either and all of this is making you cranky.  Take a deep breath.  We can fix this.

The agent can only recognize an HA system when it knows of the clusterware home.  For some reason the agents weren’t aware of the CRS_HOME on some of our Solaris hosts.

Here’s my solution; the commands are sketched after the steps below.

  1. Determine the CRS_HOME with a quick ps -ef | grep lsnr
  2. Add that home to your environment settings with export CRS_HOME=/my/path/to/crshome
  3. Stop and start the agent to pick up the new information.
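On one of the affected hosts that looked roughly like this (the Grid home and agent home below are example paths, not your actual homes):

ps -ef | grep lsnr                        # the listener's path reveals the Grid Infrastructure home
export CRS_HOME=/u01/app/12.1.0/grid      # example path
$AGENT_HOME/bin/emctl stop agent
$AGENT_HOME/bin/emctl start agent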

The agent will discover the clusterware home and report it back to the management servers.

Run through your typical process in Add Targets Manually to save the HA system as a monitored target.


Come right back to Add Targets Manually to discover, configure, and add the cluster database using the Guided Process.

In some cases the Guided Process didn’t discover any database targets so I  ran through the dreaded Add Targets Declaratively by Specifying Target Monitoring Properties.  Web page titles like that are good indicators of their unpleasantness.  This one is no exception.



I found that configuration on that page is quicker if you limit yourself to just the RAC database and one of its instances.  Once that pair is saved, open the Monitoring Configuration page for your newly discovered RAC database and hit the Sync Cluster Database Instances button.  OEM will add the other instances to the list with no typing on your part.  Test the updated connections and save your work.

Reward yourself – you just earned it.




Resolve ‘Device or resource busy’ problems


Upgrading the OPatch utility consists of two steps:

  1. Remove the old copy of OPatch by deleting the $OH/OPatch directory
  2. Unzip the replacement file into your Oracle home directory

But when you try to remove the contents of the OPatch directory, you get slammed with errors like this:

rm: cannot remove `jlib/.nfs0000000005213282002649ad': Device or resource busy
rm: cannot remove `jlib/.nfs000000000523b2e4002649ae': Device or resource busy

rm: cannot remove `ocm/lib/.nfs0000000005226a87002649b3': Device or resource busy
rm: cannot remove `ocm/lib/.nfs0000000005226a88002649b4': Device or resource busy


The lsof utility lists running processes associated with a directory if you use this syntax:

lsof +D <directory name>



oms > pwd

oms > rm -rf *
rm: cannot remove `jlib/.nfs0000000005213282002649ad': Device or resource busy
rm: cannot remove `jlib/.nfs000000000523b2e4002649ae': Device or resource busy
rm: cannot remove `jlib/.nfs000000000523b2e5002649af': Device or resource busy
rm: cannot remove `jlib/.nfs000000000524eaad002649b0': Device or resource busy
rm: cannot remove `jlib/.nfs000000000524eaae002649b1': Device or resource busy
rm: cannot remove `ocm/lib/.nfs0000000005226a87002649b3': Device or resource busy
rm: cannot remove `ocm/lib/.nfs0000000005226a88002649b4': Device or resource busy

oms > lsof +D oplan
java 3520 oracle mem REG 0,22 8760 83713032 oplan/jlib/.nfs0000000004fd5c08003d981f
java 3520 oracle mem REG 0,22 18675 83713035 oplan/jlib/.nfs0000000004fd5c0b003d9821
java 3520 oracle mem REG 0,22 6861 83713028 oplan/jlib/.nfs0000000004fd5c04003d981c

oms > kill -9 3520

oms > lsof +D oplan

< Nothing returned >
oms > pwd

oms > rm -rf *

oms > cd ../

oms > rmdir OPatch

oms > unzip /shared/oem_patches/opatch/

OMS Configuration Backup



The Job system in OEM provides a flexible and reliable means of scheduling custom activities. This can be particularly handy for managing OEM itself.

Backing up the configuration of your management server is essential for recovery, and the Job system is a great tool for scheduling it.

Job Creation

Start by creating a Job through Enterprise | Job | Job Library.  Select OS Command from the Create Library Job dropdown.


On the General tab give the job a name and description, then associate it with the OMS server hosting the Admin Server for your cluster


On the Parameters tab

  • Select Script as the Command Type
  • Create a script in the OS Script block from the sketch below, inserting your own paths and sysman password.  The directory referenced as BU_DIR should be on the shared drive containing your Software Library
  • Enter your command interpreter (/bin/bash in this example)
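A minimal sketch of such a script (paths are placeholders; emctl exportconfig oms does the actual work):

#!/bin/bash
# back up the OMS configuration to the shared Software Library file system
OMS_HOME=/u01/app/oracle/middleware          # placeholder
BU_DIR=/shared/oem/oms_config_backups        # placeholder - a directory on the Software Library share
$OMS_HOME/bin/emctl exportconfig oms -sysman_pwd "$SYSMAN_PWD" -dir $BU_DIR   # set SYSMAN_PWD however your standards allow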


Associate the job with your appropriate named credential on the Credentials tab.

Create an initial schedule in the Schedule tab. We’ll come back to this page when we run a test later.


Save your new job to the library


Test It

You’ll be returned to the Job Library page.  Click the radio button for your new job and press the Submit button


Go to the Schedule tab and select One Time (Immediate).


Click-through on the job name (OMS BACKUP 1.0 in this example) to follow your progress




  1. Visit the OMS server and verify that the file was created where you expected it.  Do this again in a few days to verify that the job is running as scheduled
  2. Check back with the Job Activity  page in a couple days.
  3. Create another job to clean up older backup files



Implement VM Virtual Box for OEM 13c



Oracle has released a nicely packaged, preconfigured OEM 13c environment for us to play with inside Oracle Virtual Box.  The VM comes with an installed repository database, a management server, and a local agent.  Great starting place.

Download the zip files from


Let’s walk through items c-f, since the instructions aren’t super clear.

Item c:  Uncheck ‘Programs’ when edelivery brings up this page


Item d:  Start typing Oracle VM Virtual Box for Oracle Enterprise Manager Cloud Control until the list appears.  Select the item highlighted below.


Item e is extraneous, since the platform is already selected for you.  Select the latest release on the next screen (there’s only one release as I write this post).

Review and accept the license agreement


We’ve finally arrived at the download page.  Hit the Download All button and find something else to do for a while.  The Download Manager will restart downloads for you if your local machine goes to sleep.


I created a separate download folder for this download, and it turned out to be handy for the next step, which is to unzip all the downloaded files.


Being conscientious professionals, we’ll start by reading the README file.  In a nutshell, it says to:

  1. Combine all the ova files into one file (see the sketch after this list)
  2. Import the appliance
  3. Start it and use it.  The password for everything is welcome1
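The concatenation step looks roughly like this (the piece names are purely illustrative; use the file names from your download and the exact command given in the README):

# stitch the downloaded pieces back into a single appliance file
cat VBox_EM13c*.ova.* > EM13c_full.ova
# then import EM13c_full.ova via File | Import Appliance in VirtualBox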

BTW:  I increased the VM’s memory to 10GB to make the VM run a little better.

The first thing I did when I opened the VM was create a desktop shortcut to a terminal app, in this case it’s Konsole.  I’m not a KDE user, so I guessed my way through it:

  1. Right click on the desktop
  2. Create New | Link to Application
  3. Hit the Application tab, then browse to /usr/bin to find konsole

You can also go through the Launcher, find the application in the list, and create a shortcut from there.  That’s what I did to get Mozilla on the desktop.

Open that terminal and execute ./ to get it all started.  Open a second terminal and run top to see how hard this VM has to work to get things started.

One last thing:  After you’ve started the VM, run first and then run to get a nice, clean start on everything including the OMR.