Recently I had to clone a schema from Oracle Database 10.2.0.3 on HP-UX to 10.2.0.4 on Linux. The first problem came when I tried to export it and got the following errors:
UDE-00008: operation generated ORACLE error 31623
ORA-31623: a job is not attached to this session via the specified handle
ORA-06512: at "SYS.DBMS_DATAPUMP", line 2315
ORA-06512: at "SYS.DBMS_DATAPUMP", line 3185
ORA-06512: at line 1
These gave no sign of what the problem could be, but when I tried to export only the metadata I got an ORA-00600 in the alert log, which helped me find the solution:
Tue May 29 12:21:13 2012
Errors in file /oracle/admin/brego/udump/brego_ora_21785.trc:
ORA-00600: internal error code, arguments: [kwqbgqc: bad state], [1], [1], [], [], [], [], []
Tue May 29 12:22:01 2012
kwqiclode Warning : 600 happened during Queue load
The error was caused by an invalid Data Pump queue; refer to the following MOS notes to resolve it:
DataPump Import (IMPDP) Fails With Errors UDI-8 OCI-22303 ORA-600 [kwqbgqc: bad state] [ID 1086334.1]
Errors ORA-31623 And ORA-600 [kwqbgqc: bad state] During DataPump Export Or Import [ID 754401.1]
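For reference, the metadata-only export that surfaced the ORA-00600 was along these lines (schema, directory and file names are illustrative):
expdp system schemas=SCOTT content=metadata_only directory=DATA_PUMP_DIR dumpfile=scott_meta.dmp logfile=scott_meta.log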
These days I'm implementing Oracle Data Guard for two Oracle 10.2 databases as part of a disaster recovery project; one of them is around 1.7TB and not yet in production. As part of the DG setup, backups have to be available for both primary and standby. I preferred to use ASM and was able to negotiate with the storage admin to run storage replication for the FRA disks during the backup. This way I would have the same structure and files locally at the disaster site immediately after the backup of the primary database completed.
Unfortunately two weeks passed, and by the time (this morning) I had to start with the DG setup the FRA (2TB) was exhausted because of too many archivelogs. When I saw this, the first thing that came to my mind was to delete the backup, as it was too big and I already had it at the disaster site. Deleting archivelogs was not an option, as I needed them so the standby could catch up with the primary. So, without thinking, I deleted the backup and then moved on to duplicate the primary database for the standby.
I set up the standby instance and when I issued "DUPLICATE TARGET DATABASE FOR STANDBY NOFILENAMECHECK DORECOVER;" I got the following errors:
RMAN-03002: failure of Duplicate Db command at 06/05/2012 11:12:04
RMAN-03015: error occurred in stored script Memory Script
RMAN-06136: ORACLE error from auxiliary database: ORA-01180: can not create datafile 1
ORA-01110: data file 1: '+DATA/orcl/datafile/system.347.666704609'
I was surprised by this error and started wondering whether there was a problem with the storage (twice before we had ended up with read-only disks), but this was not the case. Once the diskgroup is mounted, ASM can read from and write to its disks.
It's obvious what my mistake was, but it took me a while to realise what I had done. Deleting the backup at the primary database means that the primary no longer knows about it, and for that reason I could not clone the primary database. That's why I got the error that datafile 1 could not be created: there simply were no backups to restore it from. Thinking about it now, if the backup had been located on an NFS share I would definitely not have deleted it, but maybe moved the archivelogs and manually registered them later on the standby.
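Had I gone that route, cataloguing the moved archivelogs on the standby would be roughly this (the NFS path is just an illustration):
RMAN> CATALOG START WITH '/nfs/arch/';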
So now I've started a new backup and am waiting for it to finish and get replicated to the disaster site.
Recently I was given three virtual machines running Oracle Enterprise Linux 5 and Oracle 11gR2 RAC on Oracle VM 2.2.1, copied straight from /OVS/running_pool/. I had to get these machines up and running in my lab environment, but I found it hard to set up the network. I spent half a day debugging without success, but finally found a workaround, which I'll explain here.
Just a few technical notes first – Oracle VM (Xen) has three main network configurations within /etc/xen/xend-config.sxp:
Bridged Networking – this is the default and the simplest to configure. It means that the VM guest gets an IP from the same network as the VM host, and the guest can also take advantage of DHCP, if available. The following lines should be uncommented in /etc/xen/xend-config.sxp: (network-script network-bridge)
(vif-script vif-bridge)
Routed Networking with NAT – this configuration is most common where a private LAN must be used, for example when the VM host runs on your notebook and you can't get another IP from the corporate or lab network. For this you have to set up a private LAN and NAT the VM guests so they can access the rest of the network. The following lines should be uncommented in /etc/xen/xend-config.sxp: (network-script network-nat)
(vif-script vif-nat)
Two-way Routed Network – this configuration requires more manual steps, but offers greater flexibility. It is exactly the same as the second one, except that the VM guests are exposed on the external network: when a VM guest makes a connection to an external machine, its original IP is seen. The following lines should be uncommented in /etc/xen/xend-config.sxp: (network-script network-route)
(vif-script vif-route)
Typically only one of the above can be used at a time, and the choice depends on the network setup. For the second and third configurations to work, a route must be added on the default gateway. For example, if my Oracle VM host has IP address 192.168.143.10, then on the default gateway (192.168.143.1) a route has to be added to explicitly route all connection requests for my VM guests through my VM host. Something like this:
route add -net 10.0.1.0 netmask 255.255.255.0 gw 192.168.143.10
Now back to the case itself. Each of the RAC nodes had two NICs – one for the public connections and one for the private interconnect, which is used by GI and RAC. The public network was 10.0.1.X and the private one 192.168.1.X. What I wanted was to run the VM guests in my lab and access them directly with IP addresses from the lab network, which was 192.168.143.X. As we know, the default network configuration is bridged networking, so I went with that one. Having the VM guest config files, all I had to do was change the first (public) address of every guest to something like this:
vif = ['mac=00:16:3e:22:0d:04, ip=192.168.143.151, bridge=xenbr0', 'mac=00:16:3e:22:0d:14, ip=192.168.1.11',]
This turned out to be a real nightmare; I spent half a day looking into why my VM guests didn't have access to the lab network. They had access to the VM host, but not to the outside world. Maybe it was because I was running Oracle VM on top of VMware, but I finally gave up on this configuration.
Thus I had to use one of the other two network configurations – Routed Networking with NAT or Two-way Routed Network. In either case I didn't have access to the default gateway and would not be able to put a static route to my VM guests there.
Here is how I solved it – running a three node RAC on Oracle VM Server 2.2.1, keeping the original network configuration and accessing the nodes with IP addresses from my lab network (192.168.143.X). I put logical IPs for the VM guests on the VM host using ip (ifconfig could also be used) and then used iptables to change the packet destination to the VM guests' real addresses (10.0.1.X).
1. Change the Oracle VM configuration to Two-way Routed Network: comment out the lines for the default bridge configuration and uncomment the ones for routed networking:
(network-script network-route)
(vif-script vif-route)
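For the change to take effect, the Xen daemon needs a restart (service name as on a stock Oracle VM 2.2 dom0):
service xend restart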
2. Configure VM host itself for forwarding:
echo 1 > /proc/sys/net/ipv4/conf/all/proxy_arp
iptables -t nat -A POSTROUTING -s 10.0.1.0/24 -j MASQUERADE
3. Set network aliases with the IP addresses that you want to use for the VM guests:
ip addr add 192.168.143.151/32 dev eth0:1
ip addr add 192.168.143.152/32 dev eth0:2
ip addr add 192.168.143.153/32 dev eth0:3
4. Create iptables rules in the PREROUTING chain that redirect requests arriving on the lab network IPs to the VM guests' original IPs:
iptables -t nat -A PREROUTING -d 192.168.143.151 -i eth0 -j DNAT --to-destination 10.0.1.11
iptables -t nat -A PREROUTING -d 192.168.143.152 -i eth0 -j DNAT --to-destination 10.0.1.12
iptables -t nat -A PREROUTING -d 192.168.143.153 -i eth0 -j DNAT --to-destination 10.0.1.13
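To double-check the rules and keep them across a reboot on OEL 5, something like this should do (verify it fits your setup before relying on it):
iptables -t nat -L -n
service iptables save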
5. Just untar the VM guest in /OVS/running_pool/
[root@ovm22 running_pool]# ls -al /OVS/running_pool/dbnode1/
total 26358330
drwxr-xr-x 2 root root 3896 Aug 6 17:27 .
drwxrwxrwx 6 root root 3896 Aug 3 11:18 ..
-rw-r--r-- 1 root root 2294367596 May 16 17:27 swap.img
-rw-r--r-- 1 root root 4589434792 May 16 17:27 system.img
-rw-r--r-- 1 root root 20107128360 May 16 17:27 u01.img
-rw-r--r-- 1 root root 436 Aug 6 11:20 vm.cfg
6. Run the guest:
xm create /OVS/running_pool/dbnode1/vm.cfg
Now I have a three node RAC, the nodes have their original public IPs and I can access them using my lab network IPs. The mapping works like this:
Request to 192.168.143.151 -> the IP address is up on the VM host -> iptables on the VM host takes action -> the packet destination IP address is changed to 10.0.1.11 -> a static route is already in place on the VM host routing the packet to the vif interface of the VM guest.
Now I can access my dbnode1 (10.0.1.11) directly with its lab network IP 192.168.143.151.
This is a short guide on how to run the standalone Oracle APEX Listener 2.0 beta with Oracle 11g XE. I'm using Oracle Enterprise Linux 5.7 to run the Oracle APEX Listener and Oracle Database 11g XE with APEX 4.1.1. Although running the APEX Listener standalone is not supported, I'm using it to run several internal applications for company needs.
When using the APEX Listener with Oracle XE, APEX won't work properly and a white screen appears when APEX is opened. This is because the APEX images are stored in the XML DB repository, while the APEX Listener has to be run with the --apex-images parameter pointing to a directory containing the images on the filesystem. To solve this I downloaded the latest APEX patch and copied the images from it.
If you have another database running on the same machine, keep this in mind.
Install Oracle 11g XE and update Oracle APEX to the latest version:
3. Unlock and set password for apex_public_user at the Oracle XE database:
alter user APEX_PUBLIC_USER account unlock;
alter user APEX_PUBLIC_USER identified by secret;
4. Patch Oracle APEX to support RESTful Services:
cd /oracle/apxlsnr/apex_patch/
sqlplus / as sysdba @catpatch.sql
Set passwords for both users APEX_LISTENER and APEX_REST_PUBLIC_USER.
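If you ever need to (re)set them manually, the SQL is along these lines (the password is a placeholder):
alter user APEX_LISTENER identified by secret account unlock;
alter user APEX_REST_PUBLIC_USER identified by secret account unlock;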
Now this is the tricky part: for the XE edition the images are kept in the XML DB repository, so they have to be copied from the patch to the listener home:
cp -r /tmp/patch/images .
In case you want to run it in the background, here's how to do it:
nohup java -jar apex.war standalone --apex-images /oracle/apxlsnr/images > apxlsnr.log &
Periodically I was seeing exceptions like these:
ConnectionPoolException [error=BAD_CONFIGURATION]
…
Caused by: oracle.ucp.UniversalConnectionPoolException: Universal Connection Pool already exists in the Universal Connection Pool Manager. Universal Connection Pool cannot be added to the Universal Connection Pool Manager
I found that if the APEX Listener is not configured with RESTful Services, these messages appear in the log and can be safely ignored.
A year ago I installed a WebLogic server and an Oracle database server for a customer. A few months later my colleagues asked me whether something had changed in the environment or something had happened at the data center, because they had started to see Java exceptions for an unknown hostname for a web service they were calling, and this was happening only from time to time. We checked the firewall rules, DNS and all the rest, but everything seemed to be working fine. The only solution they came up with at the time was to add the hostname to the hosts file of the WebLogic server. These were the versions of the software:
Oracle Enterprise Linux 6.2 (64bit)
Weblogic Server 10.3.5
JDK 1.6.0_31
Then, six months later, the problem showed up again. The new version of the application was calling another web service, which obviously was missing from the hosts file, and this time I decided to investigate and find out what was really happening. After I received the email I immediately logged in to the server and fired several nslookups and ping requests to the host that was causing problems; both were successful and returned the correct result. I double-checked the hosts file, nsswitch.conf and all the network settings; everything was correct. Meanwhile the WebLogic server log kept getting java.net.UnknownHostException for the very same host.
Obviously the problem required a different approach. I put together a small Java program that calls InetAddress.getByName and in some way simulates the application behaviour (the web service hostname was intentionally changed); this is the program:
[root@srv tmp]# cd /tmp/
cat > DomainResolutionTest.java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.io.PrintWriter;
import java.io.StringWriter;
public class DomainResolutionTest {
public static void main(String[] args) {
if (args.length == 0) args = new String[] { "sve.to" };
try {
InetAddress ip = InetAddress.getByName(args[0]);
System.out.println(ip.toString());
}catch (UnknownHostException uhx) {
System.out.println("ERROR: " + uhx.getMessage() + "\n" + getStackTrace(uhx));
Throwable cause = uhx.getCause();
if (cause != null) System.out.println("CAUSE: " + cause.getMessage());
}
}
public static String getStackTrace(Throwable t)
{
StringWriter sw = new StringWriter();
PrintWriter pw = new PrintWriter(sw, true);
t.printStackTrace(pw);
pw.flush();
sw.flush();
return sw.toString();
}
}
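To compile it and run it once against the problematic host:
javac DomainResolutionTest.java
java DomainResolutionTest sve.to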
Running the program several times returned the correct address and no error occurred, but looping it for some time reproduced the exception I was looking for:
while true; do java DomainResolutionTest; done > 2
^C
[root@srv tmp]# wc -l 2
2648 2
[root@srv tmp]# grep Unknown 2
java.net.UnknownHostException: sve.to
[root@srv tmp]# less 2
......
sve.to/95.154.250.125
ERROR: sve.to
java.net.UnknownHostException: sve.to
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:849)
at java.net.InetAddress.getAddressFromNameService(InetAddress.java:1202)
at java.net.InetAddress.getAllByName0(InetAddress.java:1153)
at java.net.InetAddress.getAllByName(InetAddress.java:1083)
at java.net.InetAddress.getAllByName(InetAddress.java:1019)
at java.net.InetAddress.getByName(InetAddress.java:969)
at DomainResolutionTest.main(DomainResolutionTest.java:12)
sve.to/95.154.250.125
....
From what I understand, the Java process tries to look up the IP address of the requested host, but first it takes all the IP addresses of the local interfaces (eth0 and lo), including their default IPv6 addresses, and then tries to resolve those IP addresses back to hostnames. Although I hadn't configured IPv6 addresses for the interfaces, they already had default ones: the OS had IPv6 enabled by default, so the interfaces got default IPv6 addresses. During the installation I had removed the localhost6 (::1) record from the hosts file, which later caused this error, and the record for the eth0 IP address was also missing.
The problem may be that the JVM performs both IPv6 and IPv4 queries, and if the DNS server is not configured to handle IPv6 queries the application might throw an unknown host exception, or it has to wait for the IPv6 query to time out. The workaround for this is to make Java use only the IPv4 stack by running the Java process with the -Djava.net.preferIPv4Stack=true parameter, thus avoiding the IPv6 lookup. Unfortunately, running the above Java program with this parameter still returned UnknownHostException.
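For the test program that simply means launching it like this:
java -Djava.net.preferIPv4Stack=true DomainResolutionTest sve.to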
It looks like a genuine bug with IPv6 in Java; I also saw a few bugs opened with Sun regarding this behaviour, but there was no solution. Finally, after adding hostnames for the local interfaces' IPv6 addresses to the hosts file, the exceptions disappeared:
::1 localhost6
fe80::20c:29ff:fe36:4144 srv6
So for future installations I'll be explicitly disabling IPv6 on the systems. The easiest way to do that is like this:
cat >> /etc/sysctl.conf
#disable all ipv6 capabilities on a kernel level
net.ipv6.conf.all.disable_ipv6 = 1
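To apply the setting without a reboot (assuming the kernel exposes this sysctl):
sysctl -p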
During an Exadata project I had to put some order in and tidy up the Enterprise Manager targets. I decided to discover all targets, promote the ones which were missing and delete old/stale ones. I got a strange error and decided to share it in case someone else hits it. I'm running Oracle Enterprise Manager Cloud Control 12c Release 2.
When you try to run auto discovery from the console for a host you almost immediately get the following error:
Run Discovery Now failed on host oraexa201.host.net: oracle.sysman.core.disc.common.AutoDiscoveryException: Unable to run on discovery on demand.RunCollection: exception occurred: oracle.sysman.gcagent.task.TaskPreExecuteCheckException: non-existent, broken, or not fully loaded target
When reviewing the agent log, the following exception can be seen:
tail $ORACLE_HOME/agent/sysman/log/gcagent.log
2013-07-16 11:26:06,229 [34:2C47351F] INFO - >>> Reporting exception: oracle.sysman.emSDK.agent.client.exception.NoSuchMetricException: the DiscoverNow metric does not exist for host target oraexa201.host.net (request id 1) <<<
oracle.sysman.emSDK.agent.client.exception.NoSuchMetricException: the DiscoverNow metric does not exist for host target oraexa201.host.net
Got another error message during my second (test) run:
2013-08-02 15:20:47,155 [33:B3DBCC59] INFO - >>> Reporting response: RunCollectionResponse ([DiscoverTargets : host.oraexa201.host.net oracle.sysman.emSDK.agent.client.exception.RunCollectionItemException: Metric evaluation failed : RunCollection: exception occurred: oracle.sysman.gcagent.task.TaskPreExecuteCheckException: non-existent, broken, or not fully loaded target @ yyyy-MM-dd HH:mm:ss,SSS]) (request id 1) <<<
Although emctl status agent shows that the last successful heartbeat and upload are up to date, you still cannot discover targets on the host.
This is caused by the fact that the host is under BLACKOUT!
1. Through the console, end the blackout for that host:
Go to Setup -> Manage Cloud Control -> Agents, find the agent for which you experience the problem and click on it. You can then clearly see that the status of the agent is “Under Blackout”. Simply select the Agent drop-down menu -> Control and then End Blackout.
2. Using emcli: first log in, list the blackouts and then stop the blackout:
[oracle@em ~]$ emcli login -username=sysman
Enter password
Login successful
[oracle@em ~]$ emcli get_blackouts
Name Created By Status Status ID Next Start Duration Reason Frequency Repeat Start Time End Time Previous End TZ Region TZ Offset
test_blackout SYSMAN Started 4 2013-08-02 15:06:43 01:00 Hardware Patch/Maintenance once none 2013-08-02 15:06:43 2013-08-02 16:06:43 none Europe/London +00:00
List the targets which are under the blackout and then stop it:
[oracle@em ~]$ emcli get_blackout_targets -name="test_blackout"
Target Name Target Type Status Status ID
has_oraexa201.host.net has In Blackout 1
oraexa201.host.net host In Blackout 1
TESTDB_TESTDB1 oracle_database In Blackout 1
oraexa201.host.net:3872 oracle_emd In Blackout 1
Ora11g_gridinfrahome1_1_oraexa201 oracle_home In Blackout 1
OraDb11g_home1_2_oraexa201 oracle_home In Blackout 1
agent12c1_3_oraexa201 oracle_home In Blackout 1
sbin12c1_4_oraexa201 oracle_home In Blackout 1
LISTENER_oraexa201.host.net oracle_listener In Blackout 1
+ASM_oraexa2-cluster osm_cluster In Blackout 1
+ASM4_oraexa201.host.net osm_instance In Blackout 1
[oracle@em ~]$ emcli stop_blackout -name="test_blackout"
Blackout "test_blackout" stopped successfully
And now when the discovery is run again:
Run Discovery Now – Completed Successfully
I was unable to get an error initially when I set the blackout, but then got the error after restarting the EM agent.
22 Oct 2013 Update:
After the update to 12c (described here), a meaningful error is now raised when you try to discover targets during an agent blackout:
Run Discovery Now failed on host oraexa201.host.net: oracle.sysman.core.disc.common.AutoDiscoveryException: Unable to run on discovery on demand.the target is currently blacked out
Just a quick wrap-up on the EM12cR3 upgrade. I have to say that I was pleasantly surprised that everything went so smoothly. I didn't expect anything else, but with so many products and components in there I had a few things on my mind. The version I had was already upgraded to 12.1.0.2, so it was really easy for me to run the upgrade.
Things to watch out for:
- You need to already be running OEM version 12.1.0.2 to be able to upgrade to 12.1.0.3. If not, you must apply BP1 to your 12.1.0.1 installation and then patch to 12.1.0.3. Remember to patch the agents as well.
- The upgrade to 12.1.0.3 is an out-of-place upgrade, so you need to point to a new middleware home and you'll need an additional 15 GB for the installation.
- The installation takes between 1 and 2 hours to complete, depending on your machine's power.
- I didn't stop any of the agents during the upgrade.
- After the upgrade all OMS components were started automatically.
Here is what I’ve done:
1. Definitely take a backup of the middleware home and of the database as well. You don't want to end up removing the agents and reinstalling the OMS; I had 400+ targets and failure wasn't an option. For the middleware home I used a simple tar, and RMAN for the repository database.
2. Stop the OMS and other components:
cd $ORACLE_HOME/bin
./emctl stop oms -all
3. It's required that the EMKey be copied to the repository prior to the upgrade; if you miss that, the installer will kindly remind you. There is also a note about it in the documentation.
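For reference, copying the EMKey to the repository is a one-liner run from the OMS home (you'll be prompted for the SYSMAN password):
cd $ORACLE_HOME/bin
./emctl config emkey -copy_to_repos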
Recently I've been doing a lot of OEM work for a customer and I decided to tidy and clean things up a little. The OEM version was 12cR2; it was used for monitoring a few Exadatas and had around 650 targets. I planned to upgrade OEM to 12cR3, upgrade all agents and plugins, promote any un-promoted targets and delete any old/stale targets – at the end of the day I wanted an up-to-date version of OEM and up-to-date information about the monitored targets.
The upgrade to 12cR3 went easily and smoothly (described here in detail), and the upgrade of agents and plugins went fairly easily as well. After some time everything was looking fine, but I wanted to be sure I hadn't missed anything, and from one thing to another I found the EMDIAG Repository Verification utility. It's something I first heard Julian Dontcheff mention at one of the BGOUG conferences and something I had always been looking to try.
So in a series of posts I will describe the installation of the EMDIAG kit and how I fixed some of the problems I found in my OEM repository.
What is the EMDIAG kit
Basically EMDIAG consists of three kits:
- Repository verification (repvfy) – extracts data from the repository and runs a series of tests against it to help diagnose problems with OEM
- Agent verification (agtvfy) – troubleshoots problems with OEM agents
- OMS verification (omsvfy) – troubleshoots problems with the Oracle Management Service (OMS)
In this and the following posts I'll be referring to the first one – the repository verification kit.
The EMDIAG repvfy kit consists of a set of tests which are run against the OEM repository to help the EM administrator troubleshoot, analyze and resolve OEM-related problems.
The kit uses a shell script and a Perl script as wrappers and a lot of SQL scripts to run the different tests and collect information from the repository.
It's good to know that the EMDIAG kit has been around for quite a long time and is also available for Grid Control 10g and 11g and DB Control 10g and 11g. These posts refer to the EMDIAG repvfy 12c kit, which can be installed only in a Cloud Control Management Repository 12c. The repvfy 12c kit will also be included in RDA Release 4.30.
Apart from using the EMDIAG kit to find and resolve specific problems, it is good practice to run it at least once a week or month to check for any newly reported problems.
Installing EMDIAG repvfy kit
Installation is pretty simple and straightforward. First download the EMDIAG repvfy 12c kit from the following MOS note:
EMDIAG REPVFY Kit for Cloud Control 12c – Download, Install/De-Install and Upgrade (Doc ID 1426973.1)
Just set your ORACLE_HOME to the database hosting the Cloud Control Management Repository and create a new directory, emdiag, where the tool will be unzipped:
[oracle@em ~]$ . oraenv
ORACLE_SID = [oracle] ? GRIDDB
The Oracle base for ORACLE_HOME=/opt/oracle/product/11.2.0/db_1 is /opt/oracle
[oracle@em ~]$ cd $ORACLE_HOME
[oracle@em db_1]$ mkdir emdiag
[oracle@em db_1]$ cd emdiag/
[oracle@em emdiag]$ unzip -q /tmp/repvfy12c20131008.zip
[oracle@em emdiag]$ cd bin
[oracle@em bin]$ ./repvfy install
Please enter the SYSMAN password:
...
...
COMPONENT INFO
-------------------- --------------------
EMDIAG Version 2013.1008
EMDIAG Edition 2
Repository Version 12.1.0.3.0
Database Version 11.2.0.3.0
Test Version 2013.1015
Repository Type CENTRAL
Verify Tests 496
Object Tests 196
Deployment SMALL
[oracle@em ~]$
And that's it – the EMDIAG kit for repository verification is now installed and we can start digging into the OEM repository. In the next post we'll cover the basics and the commands used for verification and diagnostics.
I have to say that the error didn't come up by itself; it manifested after I had to redeploy the Exadata plugin on a few agents. If you have ever had to do this, you'll know that before removing the plugin from an agent you need to make sure the agent is not the primary monitoring agent for Exadata targets. In my case a few of the agents were Monitoring Agents for the cells, and I had to swap them with the Backup Monitoring Agent so I would be able to redeploy the plugin on the primary monitoring agent.
After I redeployed the plugin, I tried to revert to the initial configuration, but for some reason the configuration got messed up and I ended up with different agents monitoring different cell targets from what was there at the beginning.
It turned out that one of the monitoring agents wasn't able to connect to the cell, and that's why I got the email notifications and the metric evaluation errors for the cells. Although that's not a real problem, it's quite annoying to receive such alerts and to have all these targets with metric collection error icons in OEM, or reported with status Down.
Let’s first check which are the monitoring agents for that cell target from the OEM repository:
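A quick look at the SYSMAN view MGMT$AGENTS_MONITORING_TARGETS (used again further down) does the job; just filter the output on the cell name:
select * from sysman.mgmt$agents_monitoring_targets;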
Looking at the cell's secure log, we can see that one of the monitoring agents wasn't able to connect because of failed publickey authentication:
Oct 23 11:39:54 exacel05 sshd[465]: Connection from 10.141.8.65 port 14594
Oct 23 11:39:54 exacel05 sshd[465]: Failed publickey for cellmonitor from 10.141.8.65 port 14594 ssh2
Oct 23 11:39:54 exacel05 sshd[466]: Connection closed by 10.141.8.65
Oct 23 11:39:55 exacel05 sshd[467]: Connection from 10.141.8.66 port 27799
Oct 23 11:39:55 exacel05 sshd[467]: Found matching DSA key: cf:99:0a:37:1a:e5:84:dc:a8:8a:b9:6f:0c:fd:05:c5
Oct 23 11:39:55 exacel05 sshd[468]: Postponed publickey for cellmonitor from 10.141.8.66 port 27799 ssh2
Oct 23 11:39:55 exacel05 sshd[467]: Found matching DSA key: cf:99:0a:37:1a:e5:84:dc:a8:8a:b9:6f:0c:fd:05:c5
Oct 23 11:39:55 exacel05 sshd[467]: Accepted publickey for cellmonitor from 10.141.8.66 port 27799 ssh2
Oct 23 11:39:55 exacel05 sshd[467]: pam_unix(sshd:session): session opened for user cellmonitor by (uid=0)
That's confirmed by checking the ssh authorized_keys file, which also shows which monitoring agents were initially configured:
Another way to check which monitoring agents were configured initially is to check the snmpSubscriber attribute of that specific cell:
[root@exacel05 ~]# cellcli -e list cell attributes snmpSubscriber
((host=exadb03.localhost.localdomain,port=3872,community=public),(host=exadb04.localhost.localdomain,port=3872,community=public))
It’s obvious that exadb02 shouldn’t be monitoring this target but it should be exadb04 instead. I believe that when I redeployed the Exadata plugin this agent wasn’t eligible to monitor Exadata targets any more and was replaced with another one but that’s just a guess.
There are two solutions for that problem:
1. Move (relocate) target definition and monitoring to the correct agent:
I wasn't able to find a way to do that through the OEM console, so I used emcli. Based on the MGMT$AGENTS_MONITORING_TARGETS query and the snmpSubscriber attribute I was able to find which agent was configured initially and which had to be removed. Then I used emcli to relocate the target to the correct monitoring agent, the one which was configured initially:
[oracle@oem ~]$ emcli relocate_targets -src_agent=exadb02.localhost.localdomain:3872 -dest_agent=exadb04.localhost.localdomain:3872 -target_name=exacel05.localhost.localdomain -target_type=oracle_exadata -copy_from_src
Moved all targets from exadb02.localhost.localdomain:3872 to exadb04.localhost.localdomain:3872
2. Reconfigure the cell to use the new monitoring agent:
Add the current monitoring agent's ssh public key to the authorized_keys file on the cell:
Place the oracle user's DSA public key (/home/oracle/.ssh/id_dsa.pub) from exadb02 into exacel05:/home/cellmonitor/.ssh/authorized_keys
and also change the cell snmpSubscriber attribute:
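A sketch of that change, run with cellcli on the cell (hostnames as in this environment, with exadb02 taking exadb04's place – verify the full subscriber list before applying):
cellcli -e "alter cell snmpSubscriber=((host='exadb03.localhost.localdomain',port=3872,community=public),(host='exadb02.localhost.localdomain',port=3872,community=public))"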
This is the second post in the EMDIAG repvfy kit series and covers the basics of the tool. Having installed the kit in the earlier post, it's time to get the basics down before we start troubleshooting.
There are three main commands with repvfy:
verify – repository-wide verification
analyze – object-specific verification/analysis
dump – dump a specific repository object
As you can tell from the descriptions, verify is run against the whole repository and doesn't require any arguments by default, while the analyze and dump commands require a specific object to be given. To get a list of all available commands of the kit run repvfy -h1.
Verify command
Let's make something clear at the beginning: the verify command runs a repository-wide verification consisting of many tests which are FIRST grouped into modules and SECOND categorized into several levels. To get a list of available modules run repvfy -h4; there are more than 30 modules and I won't go into detail on each, but the most useful are Agents, Plugins, Exadata, Repository and Targets. The list of levels can be found at the end of the post; it's important to say that levels are cumulative and by default tests are run at level 2!
When investigating or debugging a problem with the repository, always start with the verify command. It's a good starting point to run verify without any arguments: it will go through all modules, give you a summary of whether certain problems (violations) are present and provide an initial look at the health of the repository, after which you can start debugging a specific problem.
So here is how verify output looked for my OEM repository:
The verify command can also be run with the -detail argument to get more details on the problems found. It will also show which test found the problem and what actions can be taken to correct it. That's useful for another reason – it will print the target name and guid, which can be used for further analysis with the analyze and dump commands.
The command can also be run with the -level argument, starting at zero for fatal errors and increasing to nine for more minor errors and best practices; the list of levels can be found at the end of the post.
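For example, a module-specific run at a higher level with details would look something like this (the module name is just an illustration):
./repvfy verify exadata -level 9 -detail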
Analyze command
The analyze command is run against a specific target, which can be specified either by its name or by its unique identifier (guid). To get a list of supported targets run repvfy -h5. The analyze command is very similar to the verify command except that it is run against a specific target. Again, it can be run with the -level and -detail arguments, like this:
[oracle@oem bin]$ ./repvfy analyze exadata -guid 6744EED794F4CCCDBA79EC00332F65D3 -level 9
Please enter the SYSMAN password:
-- --------------------------------------------------------------------- --
-- REPVFY: 2013.1008 Repository: 12.1.0.3.0 29-Oct-2013 12:00:09 --
---------------------------------------------------------------------------
-- Module: EXADATA Test: 0, Level: 9 --
-- --------------------------------------------------------------------- --
analyzeEXADATA
2001. Exadata plugin version mismatches: 1
6002. Exadata components without a backup Agent: 4
6006. Check for DB_LOST_WRITE_PROTECT: 1
6008. Check for redundant control files: 5
For that Exadata target we can see there are a few more problems found at level 9 in addition to the one found earlier, at level 2, about the plugin version mismatch.
One of the next posts will be dedicated to troubleshooting and fixing problems in the Exadata module.
Dump command
The dump command is used to dump all the information about a specific repository object; like the analyze command, it expects either a target name or a target guid. For a list of supported targets run repvfy -h6.
I won't show any output here because it dumps all the details about the target – more than 2000 lines. If you run the dump command against the same target used in analyze, you will get a ton of information: the targets associated with this Exadata (hosts, ILOMs, databases, instances), the list of monitoring agents, the plugin version, some address details and a long list of target alerts/warnings.
It may seem rather useless because it just dumps a lot of information, but it actually helped me identify the problem I had with the plugin version mismatch within the Exadata module.
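The invocation itself mirrors analyze; for the same target it would be roughly:
./repvfy dump exadata -guid 6744EED794F4CCCDBA79EC00332F65D3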
Repository verification and object analysis levels:
0 - Fatal issues (functional breakdown)
These tests highlight fatal errors found in the repository. These errors will prevent EM from functioning normally and should be addressed straight away.
1 - Critical issues (functionality blocked)
2 - Severe issues (restricted functionality)
3 - Warning issues (potential functional issue)
These tests are meant as 'warning', to highlight issues which could lead to potential problems.
4 - Informational issues (potential functional issue)
These tests are informational only. They represent best practices, potential issues, or just areas to verify.
5 - Currently not used
6 - Best practice violations
These tests highlight discrepancies between the known best practices and the actual implementation of the EM environment.
7 - Purging issues (obsolete data)
These tests highlight failures to clean up (all the) historical data, or problems with orphan data left behind in the repository.
8 - Failure Reports (historical failures)
These tests highlight historical errors that have occurred.
9 - Tests and internal verifications
These tests are internal tests, or temporary and diagnostic tests added to resolve specific problems.
They are not part of the 'regular' kit, and are usually added while debugging or testing specific issues.
In the next post I’ll troubleshoot and fix the errors I had within the Availability module – Disabled response metrics.
For more information and examples refer to following notes:
EMDIAG Repvfy 12c Kit – How to Use the Repvfy 12c kit (Doc ID 1427365.1)
EMDIAG REPVFY Kit – Overview (Doc ID 421638.1)
I had quite an interesting case recently where I had to build a stretch cluster for a customer using Oracle GI 12.1, placing the quorum voting disk on NFS. There is a document on OTN about stretch clusters and using NFS as a third location for the voting disk, but at the moment it only covers 11.2. Assuming there is no difference in the NFS parameters, I used the Linux parameters from that document and mounted the NFS share on the cluster nodes.
Later on, when I tried to add the third voting disk to the ASM disk group, I got this strange error:
SQL> ALTER DISKGROUP OCRVOTE ADD QUORUM DISK '/vote_nfs/vote_3rd' SIZE 10000M /* ASMCA */
Thu Nov 14 11:33:55 2013
NOTE: GroupBlock outside rolling migration privileged region
Thu Nov 14 11:33:55 2013
Errors in file /install/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_rbal_26408.trc:
ORA-17503: ksfdopn:3 Failed to open file /vote_nfs/vote_3rd
ORA-17500: ODM err:Operation not permitted
Thu Nov 14 11:33:55 2013
Errors in file /install/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_ora_33427.trc:
ORA-17503: ksfdopn:3 Failed to open file /vote_nfs/vote_3rd
ORA-17500: ODM err:Operation not permitted
NOTE: Assigning number (1,3) to disk (/vote_nfs/vote_3rd)
NOTE: requesting all-instance membership refresh for group=1
Thu Nov 14 11:33:55 2013
ORA-15025: could not open disk "/vote_nfs/vote_3rd"
ORA-17503: ksfdopn:3 Failed to open file /vote_nfs/vote_3rd
ORA-17500: ODM err:Operation not permitted
WARNING: Read Failed. group:1 disk:3 AU:0 offset:0 size:4096
path:Unknown disk
incarnation:0xeada1488 asynchronous result:'I/O error'
subsys:Unknown library krq:0x7f715f012d50 bufp:0x7f715e95d600 osderr1:0x0 osderr2:0x0
IO elapsed time: 0 usec Time waited on I/O: 0 usec
NOTE: Disk OCRVOTE_0003 in mode 0x7f marked for de-assignment
Errors in file /install/app/oracle/diag/asm/+asm/+ASM1/trace/+ASM1_ora_33427.trc (incident=83441):
ORA-00600: internal error code, arguments: [kfgscRevalidate_1], [1], [0], [], [], [], [], [], [], [], [], []
ORA-15080: synchronous I/O operation failed to read block 0 of disk 3 in disk group OCRVOTE
This happens because with 12c Direct NFS is used by default and it initiates connections from ports above 1024. On the other hand, the NFS server has the secure export option enabled by default, which requires incoming connections to come from ports below 1024. From the exports man page: "secure – This option requires that requests originate on an internet port less than IPPORT_RESERVED (1024). This option is on by default. To turn it off, specify insecure."
The solution is to add the insecure option to the export on the NFS server, remount the NFS share and then retry the above operation.
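A sketch of the server-side change, with the export path and client subnet as placeholders:
# entry in /etc/exports on the NFS server
/export/vote_nfs 10.1.1.0/24(rw,sync,insecure)
exportfs -ra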
For more information refer to:
12c GI Installation with ASM on NFS Disks Fails with ORA-15018 ORA-15072 ORA-15080 (Doc ID 1555356.1)
I was recently configuring backups on a customer's Exadata with IBM TSM Data Protection for Oracle and ran into a weird RMAN error. The configuration was Oracle Database 11.2, TSM client version 6.1 and TSM server version 5.5, and this was the error:
[oracle@oraexa01 ~]$ rman target /
Recovery Manager: Release 11.2.0.3.0 - Production on Wed Jan 29 16:41:54 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
connected to target database: TESTDB (DBID=2128604199)
RMAN> run {
2> allocate channel c1 device type 'SBT_TAPE';
3> }
using target database control file instead of recovery catalog
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03009: failure of allocate command on c1 channel at 01/29/2014 16:42:01
ORA-19554: error allocating device, device type: SBT_TAPE, device name:
ORA-27000: skgfqsbi: failed to initialize storage subsystem (SBT) layer
Linux-x86_64 Error: 106: Transport endpoint is already connected
Additional information: 7011
ORA-19511: Error received from media manager layer, error text:
SBT error = 7011, errno = 106, sbtopen: system error
You get this message because the Tivoli Storage Manager API error log file (errorlogname option specified in the dsm.sys file) is not writable by the Oracle user.
Just change the file permissions or change the parameter to point to a file under /<writable_path>/ and retry your backup:
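For example (paths are placeholders; adjust them to wherever dsm.sys and the error log live on your system):
chmod 666 /path/to/dsierror.log
# or point the option in dsm.sys at an oracle-writable file:
ERRORLOGNAME /writable/path/dsierror.log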
[oracle@oraexa01 ~]$ rman target /
Recovery Manager: Release 11.2.0.3.0 - Production on Wed Jan 29 16:42:52 2014
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
connected to target database: TESTDB (DBID=2128604199)
RMAN> run {
2> allocate channel c1 device type 'SBT_TAPE';
3> }
using target database control file instead of recovery catalog
allocated channel: c1
channel c1: SID=807 instance=TESTDB device type=SBT_TAPE
channel c1: Data Protection for Oracle: version 5.5.1.0
released channel: c1
Not long ago a customer asked me to create a new database and refresh it from production. Nothing special here; the database was quickly created and then refreshed using a network import for a few schemas. A few weeks later I was told that the database had a timestamp problem. The date and time were correct, but the time zone was different from production:
SQL> SELECT DBTIMEZONE FROM DUAL;
DBTIMEZONE
------
+01:00
Looking back, I tried to find out why that happened and quickly found the answer in the documentation: "If you do not specify the SET TIME_ZONE clause, then the database uses the operating system time zone of the server."
Of course, at that time the time zone of the server was +01:00 (daylight saving time) and the database inherited it. The next logical thing was simply to change the time zone to the correct one (UTC):
SQL> ALTER DATABASE SET TIME_ZONE='+00:00';
ALTER DATABASE SET TIME_ZONE='+00:00'
*
ERROR at line 1:
ORA-30079: cannot alter database timezone when database has TIMESTAMP WITH LOCAL TIME ZONE columns
Right, that won't work if there are tables with columns of type TIMESTAMP WITH LOCAL TIME ZONE and there is data in them. Unfortunately, the only solution is to export the data, drop the users and then import the data back. Also, for the change to take effect the database must be restarted.
You can simply list the columns of that type and export just those tables; I had a lot of them and decided to export/import the whole database, as it was small and used for testing anyway:
SQL> select owner, table_name, column_name, data_type from all_tab_columns where data_type like '%WITH LOCAL TIME ZONE' and owner='MY_USER';
MY_USER INVENTORY DSTAMP TIMESTAMP(6) WITH LOCAL TIME ZONE
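For a single schema the rough sequence would be something like this (directory, dump file and schema names are placeholders; I did the same thing for the whole database):
expdp system schemas=MY_USER directory=DATA_PUMP_DIR dumpfile=my_user.dmp logfile=my_user_exp.log
sqlplus / as sysdba <<EOF
drop user MY_USER cascade;
alter database set time_zone='+00:00';
shutdown immediate
startup
EOF
impdp system schemas=MY_USER directory=DATA_PUMP_DIR dumpfile=my_user.dmp logfile=my_user_imp.log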
Just a quick post regarding an OEM 12c installation: recently I had to install OEM 12c and during the repository configuration step the installation failed with the error:
ORA-12801: error signaled in parallel query server P151
This was caused by a known bug, the workaround for which is to decrease the number of parallel query servers in the repository database and start the installation over. The database had cpu_count set to 64 and parallel_max_servers to 270. After setting parallel_max_servers to a lower value, the installation completed successfully.
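The change itself is a single alter system; the value below is just an example of "a lower value", not a recommendation:
SQL> alter system set parallel_max_servers=64 scope=both;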
For more information refer to:
EM 12c: Enterprise Manager Cloud Control 12c Installation Fails At Repository Configuration With Error: ORA-12805: parallel query server died unexpectedly (Doc ID 1539444.1)
On Exadata the local drives on the compute nodes are not big enough to allow larger exports, so DBFS is often configured. In my case I had a 1.2 TB DBFS file system mounted under /dbfs_direct/.
While I was doing some exports yesterday I found that my DBFS wasn't mounted, and running a quick crsctl command to bring it online failed:
[oracle@exadb01 ~]$ crsctl start resource dbfs_mount -n exadb01
CRS-2672: Attempting to start 'dbfs_mount' on 'exadb01'
CRS-2674: Start of 'dbfs_mount' on 'exadb01' failed
CRS-2679: Attempting to clean 'dbfs_mount' on 'exadb01'
CRS-2681: Clean of 'dbfs_mount' on 'exadb01' succeeded
CRS-4000: Command Start failed, or completed with errors.
It doesn't give you any error message or reason why it's failing, and neither do the other database and grid infrastructure logs. The only useful approach is to enable tracing for the DBFS client and see what's happening. To enable tracing, edit the mount script and insert the following MOUNT_OPTIONS:
vi $GI_HOME/crs/script/mount-dbfs.sh
MOUNT_OPTIONS=trace_level=1,trace_file=/tmp/dbfs_client_trace.$$.log,trace_size=100
Now start the resource one more time to get the trace file generated. You can also enable tracing when running the client directly from the command line:
[oracle@exadb01 ~]$ dbfs_client dbfs_user@ -o allow_other,direct_io,trace_level=1,trace_file=/tmp/dbfs_client_trace.$$.log /dbfs_direct
Password:
Fail to connect to database server.
After checking the trace file it's now clear why DBFS was failing to mount – the DBFS database user's password has expired:
tail /tmp/dbfs_client_trace.100641.log.0
[43b6c940 03/12/14 11:15:01.577723 LcdfDBPool.cpp:189 ] ERROR: Failed to create session pool ret:-1
[43b6c940 03/12/14 11:15:01.577753 LcdfDBPool.cpp:399 ] ERROR: ERROR 28001 - ORA-28001: the password has expired
[43b6c940 03/12/14 11:15:01.577766 LcdfDBPool.cpp:251 ] DEBUG: Clean up OCI session pool...
[43b6c940 03/12/14 11:15:01.577805 LcdfDBPool.cpp:399 ] ERROR: ERROR 24416 - ORA-24416: Invalid session Poolname was specified.
[43b6c940 03/12/14 11:15:01.577844 LcdfDBPool.cpp:444 ] CRIT : Fail to set up database connection.
The account had the DEFAULT profile, which has the default PASSWORD_LIFE_TIME of 180 days:
SQL> select username, account_status, expiry_date, profile from dba_users where username='DBFS_USER';
USERNAME ACCOUNT_STATUS EXPIRY_DATE PROFILE
------------------------------ -------------------------------- ----------------- ------------------------------
DBFS_USER EXPIRED 03-03-14 14:56:12 DEFAULT
Elapsed: 00:00:00.02
SQL> select password from sys.user$ where name= 'DBFS_USER';
PASSWORD
------------------------------
A4BC1A17F4AAA278
Elapsed: 00:00:00.00
SQL> alter user DBFS_USER identified by values 'A4BC1A17F4AAA278';
User altered.
Elapsed: 00:00:00.03
SQL> select username, account_status, expiry_date, profile from dba_users where username='DBFS_USER';
USERNAME ACCOUNT_STATUS EXPIRY_DATE PROFILE
------------------------------ -------------------------------- ----------------- ------------------------------
DBFS_USER OPEN 09-09-14 11:09:43 DEFAULT
SQL> select * from dba_profiles where resource_name = 'PASSWORD_LIFE_TIME';
PROFILE RESOURCE_NAME RESOURCE LIMIT
------------------------------ -------------------------------- -------- ----------------------------------------
DEFAULT PASSWORD_LIFE_TIME PASSWORD 180
After resetting the database user's password, DBFS mounted successfully!
If you are using a dedicated database for DBFS, make sure you have set PASSWORD_LIFE_TIME to UNLIMITED to avoid similar issues.
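One way to do that, assuming the DBFS user is on the DEFAULT profile as it was here, is simply:
SQL> alter profile DEFAULT limit PASSWORD_LIFE_TIME unlimited;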
The following blog post continues the EMDIAG repvfy kit series and focuses on how to troubleshoot and solve the problems reported by the kit.
The repository verification kit reported a number of problems with our repository, which we are about to troubleshoot and solve one by one. It's important to note that some of the problems are related, so solving one problem could also solve another.
Here is the output I got for my OEM repository:
-- --------------------------------------------------------------------- --
-- REPVFY: 2014.0114 Repository: 12.1.0.3.0 23-Jan-2014 13:35:41 --
---------------------------------------------------------------------------
-- Module: Test: 0, Level: 2 --
-- --------------------------------------------------------------------- --
verifyAGENTS
1004. Stuck PING jobs: 10
verifyASLM
verifyAVAILABILITY
1002. Disabled response metrics (16570376): 2
verifyBLACKOUTS
verifyCAT
verifyCORE
verifyECM
1002. Unregistered ECM metadata tables: 2
verifyEMDIAG
1001. Undefined verification modules: 1
verifyEVENTS
verifyEXADATA
2001. Exadata plugin version mismatches: 5
verifyJOBS
2001. System jobs running for more than 24hr: 1
verifyJVMD
verifyLOADERS
verifyMETRICS
verifyNOTIFICATIONS
verifyOMS
1002. Stuck PING jobs: 10
verifyPLUGINS
1003. Plugin metadata versions out of sync: 13
verifyREPOSITORY
verifyTARGETS
1021. Composite metric calculation with inconsistent dependant metadata versions: 3
2004. Targets without an ORACLE_HOME association: 2
2007. Targets with unpromoted ORACLE_HOME target: 2
verifyUSERS
I usually follow this sequence of actions when troubleshooting repository problems:
1. Verify the module with the -detail option. Increasing the level might also show more problems, or problems related to the current one.
2. Dump the module and check for any unusual activity.
3. Check repository database alert log for any errors.
4. Check emagent logs for any errors.
5. Check OMS logs for any errors.
Troubleshoot Stuck PING jobs
Looking at the first problem reported by verifyAGENTS – stuck PING jobs – we can easily spot the relation between the verifyAGENTS, verifyJOBS and verifyOMS modules, where the same problem shows up. For some reason there are ten ping jobs which are stuck and have been running for more than 24 hours.
The best approach is to run verify against any of these modules with the -detail option. This shows more information and should help analyze the problem. Running the detailed report for AGENTS and OMS didn't help and didn't show much information related to the stuck ping jobs. However, running the detailed report for JOBS, we were able to identify the job_id, the job_name and when the job was started:
So we can see that the stuck job was started at 19:02 on the 3rd of December, and the time of the check was the 23rd of January.
Now we can say that there is a problem with the jobs rather than with the agents or the OMS; the problems in those two modules appeared as a result of the stuck job, so we should focus on the JOBS module.
Running analyze against the job shows the same thing as verify with the -detail option; it would be the appropriate choice if we had multiple job issues and wanted to see the details for a particular one.
Dumping the job shows a lot of useful info from the MGMT_ tables; of particular interest are the details of the execution:
Again we can confirm that the job is still running, and the next step would be to dump the execution, which shows on which step the job is waiting/hanging. That's just an example, because in my case I didn't have any steps in the job execution:
Checking the job system health could also be useful, as it shows some job history, scheduled jobs and some performance metrics:
[oracle@oem bin]$ ./repvfy dump job_health
Back to our problem: we can query MGMT_JOB to get the job name and confirm that it's a system job run by SYSMAN:
SQL> SELECT JOB_ID, JOB_NAME,JOB_OWNER, JOB_DESCRIPTION,JOB_TYPE,SYSTEM_JOB,JOB_STATUS FROM MGMT_JOB WHERE UPPER(JOB_NAME) like '%PINGCFM%'
JOB_ID JOB_NAME JOB_OWNER JOB_DESCRIPTION JOB_TYPE SYSTEM_JOB JOB_STATUS
-------------------------------- ------------------------------------------------------------ ---------- ------------------------------------------------------------ --------------- ---------- ----------
ECA6DE1A67B43914E0432084800AB548 PINGCFMJOB_ECA6DE1A67B33914E0432084800AB548 SYSMAN This is a Confirm EMD Down test job ConfirmEMDDown 2 0
We can try to stop the job using emcli and the job name:
[oracle@oem bin]$ emcli stop_job -name=PINGCFMJOB_ECA6DE1A67B33914E0432084800AB
Error: The job/execution is invalid (or non-existent)
If that doesn't work, use the EMDIAG kit to clean up the repository part:
./repvfy verify jobs -test 1998 -fix
Please enter the SYSMAN password:
-- --------------------------------------------------------------------- --
-- REPVFY: 2014.0114 Repository: 12.1.0.3.0 27-Jan-2014 18:18:36 --
---------------------------------------------------------------------------
-- Module: JOBS Test: 1998, Level: 2 --
-- --------------------------------------------------------------------- --
-- -- -- - Running in FIX mode: Data updated for all fixed tests - -- -- --
-- --------------------------------------------------------------------- --
The repository is now OK, but this does not remove the stuck thread at the OMS level. In order for the OMS to get healthy again, it needs to be restarted:
cd $OMS_HOME/bin
emctl stop oms
emctl start oms
After OMS was restarted there were no stuck jobs anymore!
I still wanted to know why this happened. Although there were a few bugs on MOS, they were not really applicable and I didn't find any of their symptoms in my case. After checking the repository database alert log I found a few disturbing messages:
Tns error struct:
Time: 03-DEC-2013 19:04:01
TNS-12637: Packet receive failed
ns secondary err code: 12532
.....
opiodr aborting process unknown ospid (15301) as a result of ORA-609
opiodr aborting process unknown ospid (15303) as a result of ORA-609
opiodr aborting process unknown ospid (15299) as a result of ORA-609
2013-12-03 19:07:58.156000 +00:00
I also found a lot of similar messages on the target databases:
Time: 03-DEC-2013 19:05:08
TNS-12535: TNS:operation timed out
ns secondary err code: 12560
nt main err code: 505
That pretty much matches the time when the job got stuck – 19:02:29. So I assume there was a network glitch at that time causing the ping job to get stuck. The solution was simply to run repvfy with the fix option and then restart the OMS.
In case the job gets stuck again after the restart, consider increasing the OMS property oracle.sysman.core.conn.maxConnForJobWorkers; there is a MOS note covering that case.
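For reference, an OMS property is changed with emctl; the value below is purely illustrative, so check the relevant note for a suitable setting:
cd $OMS_HOME/bin
./emctl set property -name oracle.sysman.core.conn.maxConnForJobWorkers -value 144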
I'll be speaking at the spring conference of the BGOUG, held between the 13th and 15th of June. I was a regular attendee of the conference for eight years in a row, but since I moved to the UK I have had to skip the last two. My session is about Oracle GoldenGate – it will cover the basics, components, usage scenarios, installation and configuration, trail files and GG records, and much more.
I’m more than happy that I will be speaking at this year’s Oracle Open World. The first and only time I attended was back in 2010 and now I’m not only attending but speaking as well!
Together with Jason Arneil I will talk about what we've learned from our Exadata implementations with two of the biggest UK retailers, so please join us:
Session ID: CON2224
Session Title: Oracle Exadata Migrations: Lessons Learned from Retail
Venue / Room: Moscone South – 310
Date and Time: 9/30/14, 15:45 – 16:30
I would like to thank E-DBA and especially Jason for making this happen!
So many things have happened in the past six months, I really can't tell how quickly the time passed. As people say, life is what happens to you while you are making plans for the future.
I wish I had had the time to blog more in the past year, and I plan to change this in the New Year!
2014 was really successful for me. I worked on some really interesting projects, configured my first Exadata, migrated a few more databases to Exadata and faced some challenging problems. This year promises to be no different, and I've already started another interesting and challenging migration.
The same year I presented at Oracle Open World, for which I would like to thank Jason Arneil for the joint presentation and E-DBA for making this happen! At the same time, e-DBA was awarded the Oracle Excellence Award Specialised Global Partner of the Year for Oracle Engineered Systems.
Last but not least, I was honoured with the Employee of the Year award last month. Again, thank you E-DBA team!
Just a short post on a problem I encountered recently.
I had to install 11.2 GI and right after running the installer I got a message saying permission denied. Below is the exact error:
[oracle@testdb grid]$ ./runInstaller -silent -showProgress -waitforcompletion -responseFile /u01/software/grid/response/grid_install_20140114.rsp
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 120 MB. Actual 7507 MB Passed
Checking swap space: must be greater than 150 MB. Actual 8191 MB Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2015-01-15_12-12-20PM. Please wait ...Error in CreateOUIProcess(): 13
: Permission denied
Quickly tracing the process, I could see that it failed to execute the Java installer:
Indeed, /tmp was mounted with the noexec option, which will not let you execute binaries located on that partition. This server was built by a hosting provider and I guess this was part of their default deployment process.
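A quick way to confirm is to check the mount options of /tmp:
mount | grep ' /tmp '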
After removing the option and remounting /tmp (mount -o remount /tmp), the installer was able to run successfully.