Saturday, September 12, 2020

gridSetup.sh -applyRU ERROR: The home is not clean. This home cannot be used since there was a failed OPatch execution in this home. Use a different home to proceed.

I plan to install a RAC VM on my notebook. I created a new VM from Oracle Linux 7.7, unzipped the 19.3 Grid home, and now need to apply the 19.7 RU.


During the gridSetup.sh -applyRU procedure we may sometimes get a failure message:

$ ./gridSetup.sh -silent -applyRU /stage/30899722

ERROR: The home is not clean. This home cannot be used since there was a failed OPatch execution in this home. Use a different home to proceed.

And some bloggers may give you the advice: "When the command gridSetup.sh fails, it makes the NEW_HOME unusable. The only way to get around this issue currently is to clean the contents out of that HOME, and unzip the gold image again."

There is another solution: attach the Grid home to the Central Inventory (oraInventory) and apply the RU sub-patches with opatch apply.
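
In outline, the workaround looks like this (a sketch only; the paths and home name match my environment below, adjust them to yours):

$ export ORACLE_HOME=/u01/app/19.0.0/grid
$ export PATH=$ORACLE_HOME/OPatch:$PATH
$ $ORACLE_HOME/oui/bin/runInstaller -silent -attachHome ORACLE_HOME="$ORACLE_HOME" ORACLE_HOME_NAME="OraGI197Home1" CRS="true"
$ cd /stage/30899722/&lt;sub-patch directory&gt;     # repeat for each sub-patch of the RU
$ opatch apply
$ $ORACLE_HOME/oui/bin/runInstaller -silent -detachHome ORACLE_HOME="$ORACLE_HOME" ORACLE_HOME_NAME="OraGI197Home1"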

I have a fresh Oracle Linux 7.7 VM, the Grid 19.3 base image, and the 19.7 RU.

 

[grid@node1 grid]$ opatch lspatches
Argument(s) Error... Oracle Home's central inventory is not found.
Please check the arguments and try again.
OPatch failed with error code 135


This error appears because on the fresh VM the Grid home is not registered in the Central Inventory.

So we need to register (attach) the Grid home in the Central Inventory.

[grid@node1 grid]$ mkdir /u01/app/oraInventory
 

[grid@node1 grid]$ $OH/oui/bin/runInstaller -silent -attachHome ORACLE_HOME="/u01/app/19.0.0/grid" ORACLE_HOME_NAME="OraGI197Home1" CRS="true"
 

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 5118 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
You can find the log of this install session at:
 /u01/app/oraInventory/logs/AttachHome2020-09-09_10-13-14PM.log
Please execute the '/u01/app/oraInventory/orainstRoot.sh' script at the end of the session.
'AttachHome' was successful.
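
As the attachHome output asks, run orainstRoot.sh as root; it sets the group and permissions on the Central Inventory:

[root@node1 ~]# /u01/app/oraInventory/orainstRoot.sh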

 

 [grid@node1 30899722]$ pwd
/stage/30899722

[grid@node1 30899722]$ ll
total 132
drwxr-x--- 5 grid oinstall     81 Apr 11 01:16 30869156
drwxr-x--- 5 grid oinstall     62 Apr 11 01:18 30869304
drwxr-x--- 5 grid oinstall     62 Apr 11 01:19 30894985
drwxr-x--- 4 grid oinstall     48 Apr 11 01:18 30898856
drwxr-x--- 2 grid oinstall   4096 Apr 11 01:20 automation
-rw-rw-r-- 1 grid oinstall   5054 Apr 11 04:37 bundle.xml
-rw-r--r-- 1 grid oinstall 122266 Apr 11 04:25 README.html
-rw-r--r-- 1 grid oinstall      0 Apr 11 01:20 README.txt


[grid@node1 30898856]$ opatch apply
Oracle Interim Patch Installer version 12.2.0.1.21
Copyright (c) 2020, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/19.0.0/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/19.0.0/grid/oraInst.loc
OPatch version    : 12.2.0.1.21
OUI version       : 12.2.0.7.0
Log file location : /u01/app/19.0.0/grid/cfgtoollogs/opatch/opatch2020-09-12_15-42-19PM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   30898856

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/19.0.0/grid')


Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '30898856' to OH '/u01/app/19.0.0/grid'

Patching component oracle.tomcat.crs, 19.0.0.0.0...
Patch 30898856 successfully applied.

Sub-set patch [29401763] has become inactive due to the application of a super-set patch [30898856].
Please refer to Doc ID 2161861.1 for any possible further required actions.
Log file location: /u01/app/19.0.0/grid/cfgtoollogs/opatch/opatch2020-09-12_15-42-19PM_1.log

OPatch succeeded.

After applying patch 30898856, check the home and move on to the next sub-patch:

[grid@node1 30899722]$ ll
drwxr-x--- 5 grid oinstall     81 Apr 11 01:16 30869156
drwxr-x--- 5 grid oinstall     62 Apr 11 01:18 30869304
drwxr-x--- 5 grid oinstall     62 Apr 11 01:19 30894985
drwxr-x--- 4 grid oinstall     48 Apr 11 01:18 30898856
drwxr-x--- 2 grid oinstall   4096 Apr 11 01:20 automation
-rw-rw-r-- 1 grid oinstall   5054 Apr 11 04:37 bundle.xml
-rw-r--r-- 1 grid oinstall 122266 Apr 11 04:25 README.html
-rw-r--r-- 1 grid oinstall      0 Apr 11 01:20 README.txt
 

[grid@node1 30899722]$ pwd
/stage/30899722


[grid@node1 30899722]$ opatch lspatches
30898856;TOMCAT RELEASE UPDATE 19.0.0.0.0 (30898856)
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
29517247;ACFS RELEASE UPDATE 19.3.0.0.0 (29517247)
29517242;Database Release Update : 19.3.0.0.190416 (29517242)

OPatch succeeded.
 

[grid@node1 30899722]$ cd 30869304
[grid@node1 30869304]$ opatch apply
Oracle Interim Patch Installer version 12.2.0.1.21
Copyright (c) 2020, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/19.0.0/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/19.0.0/grid/oraInst.loc
OPatch version    : 12.2.0.1.21
OUI version       : 12.2.0.7.0
Log file location : /u01/app/19.0.0/grid/cfgtoollogs/opatch/opatch2020-09-12_16-17-11PM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   30869304

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/19.0.0/grid')


Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '30869304' to OH '/u01/app/19.0.0/grid'

Patching component oracle.usm, 19.0.0.0.0...
Patch 30869304 successfully applied.

Sub-set patch [29517247] has become inactive due to the application of a super-set patch [30869304].
Please refer to Doc ID 2161861.1 for any possible further required actions.
Log file location: /u01/app/19.0.0/grid/cfgtoollogs/opatch/opatch2020-09-12_16-17-11PM_1.log

OPatch succeeded.

[grid@node1 30869304]$ opatch lspatches
30869304;ACFS RELEASE UPDATE 19.7.0.0.0 (30869304)
30898856;TOMCAT RELEASE UPDATE 19.0.0.0.0 (30898856)
29585399;OCW RELEASE UPDATE 19.3.0.0.0 (29585399)
29517242;Database Release Update : 19.3.0.0.190416 (29517242)

OPatch succeeded.

...

And after applying all four sub-patches:

[grid@node1 30869156]$ opatch lspatches
30869156;Database Release Update : 19.7.0.0.200414 (30869156)
30894985;OCW RELEASE UPDATE 19.7.0.0.0 (30894985)
30869304;ACFS RELEASE UPDATE 19.7.0.0.0 (30869304)
30898856;TOMCAT RELEASE UPDATE 19.0.0.0.0 (30898856)


And the last step: detach the Grid home from the Central Inventory:

[grid@node1 30869156]$ $OH/oui/bin/runInstaller -silent -detachHome ORACLE_HOME="/u01/app/19.0.0/grid" ORACLE_HOME_NAME="OraGI197Home1"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 5118 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
You can find the log of this install session at:
 /u01/app/oraInventory/logs/DetachHome2020-09-12_04-33-16PM.log
'DetachHome' was successful.


$ opatch lspatches
Inventory load failed... LsPatchesSession::loadAndPrintInstalledPatch()
LsPatchesSession failed: OPatch failed to locate Central Inventory.
Possible causes are:
    The Central Inventory is corrupted
    The oraInst.loc file specified is not valid.

OPatch failed with error code 2


After the detach, opatch can no longer find the home in the Central Inventory, which is expected. Still, I'm not sure the approach described above is a supported solution.

The supported solution is given in the note Executing "gridSetup.sh" Fails with "ERROR: The home is not clean" (Doc ID 2279633.1).
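
For completeness, the note's (and the bloggers') approach is to clean the home, unzip the gold image again, and rerun gridSetup.sh. A sketch, assuming the 19.3 gold image is staged as /stage/LINUX.X64_193000_grid_home.zip (the zip name here is only an example):

$ rm -rf /u01/app/19.0.0/grid/*
$ cd /u01/app/19.0.0/grid
$ unzip -q /stage/LINUX.X64_193000_grid_home.zip
$ ./gridSetup.sh -silent -applyRU /stage/30899722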

Tuesday, September 8, 2020

ORA-12152: TNS: Unable to send break message

A new Exadata customer came to us with the problem "ORA-12152: TNS: Unable to send break message". (What in-band and out-of-band breaks are, we know from Tanel Poder's post:
https://tanelpoder.com/2008/02/05/oracle-hidden-costs-revealed-part-1/ .)

The customer says:

"We're testing an upgrade from Oracle 11.2.0.4 to 19c. On 19c we have a problem with the connection between the application (PL/SQL Developer, sqlplus) on the client machine and the Oracle database being interrupted: ORA-12152: TNS: Unable to send break message.

This problem is a showstopper for the 19c upgrade!

The connection between the application (PL/SQL Developer, sqlplus) on the client machine and the process on the Oracle server is interrupted if the client application does not generate traffic for 60 minutes. The application waits for a response from a long-running procedure; after the procedure on the server has actually finished, the application is still waiting for it to complete. When trying to interrupt the connection from the application side, we get ORA-12152: TNS: Unable to send break message (Cause: Unable to send break message).

The network engineers don't see any problems.
A simple test case shows the problem:

begin
  dbms_lock.sleep(3590);
end;
/

Finishes successfully.
 


But


begin
  dbms_lock.sleep(3610);
end;
/
finishes unsuccessfully.
 

 "

Thanks to the detailed description, the customer's problem became clear. It is not an in-band or out-of-band break; it is actually Dead Connection Detection (DCD), which was enhanced in 12c to reduce detection time.

DCD is a mechanism that allows the RDBMS server to check whether the client is still alive.

This feature is configured on the server side with sqlnet.expire_time in sqlnet.ora. A probe packet is sent to the client every sqlnet.expire_time minutes; if the database server gets an error on the probe, the client is considered dead and the server can close the connection. In pre-12c releases this work was done by the NS layer of SQL*Net.

The 12c mechanism is intended to reduce the detection time and minimize the load on the RDBMS. It is based on the TCP keepalive property of the socket: TCP keepalive probes are sent by the OS after the connection has been idle for some time. Because the probes are implemented at the OS level, the RDBMS can rely on the socket state and does not need to send its own probes.
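
A quick way to see these OS-level keepalive timers on the server-side connections is ss with the -o option (a sketch; 1521 is an assumption for the listener port):

# ss -tno state established '( sport = :1521 )'

Sockets with keepalive enabled show a timer:(keepalive,...) column.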

But in 12c we are still able to use sqlnet.expire_time.
After the customer set sqlnet.expire_time=10, the error "ORA-12152: TNS: Unable to send break message" disappeared.
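
For reference, it is a single line in the server-side sqlnet.ora (the location assumes the default TNS_ADMIN under the database home):

# $ORACLE_HOME/network/admin/sqlnet.ora on the database server
SQLNET.EXPIRE_TIME=10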

  




Saturday, August 29, 2020

inmemory_force = CELLMEMORY_LEVEL

The Oracle Database In-Memory feature requires a certain amount of physical memory on the database server for the IM column store, plus some CPU resources.

Exadata has the ability to use the In-Memory feature on the cells (CellMemory). To use the In-Memory feature on the cells, you had to enable INMEMORY_SIZE on the database nodes, so Exadata owners still needed to spend physical memory on the database servers in addition to memory on the cells.

The 19.8 PSU brings a great new feature, inmemory_force = CELLMEMORY_LEVEL, which offloads 100% of the IM work to the cell level and significantly improves offloading for IM operations.

Starting with PSU 19.8, you can use the CellMemory feature without enabling the IM column store at the DB level: the combination of parameters

INMEMORY_FORCE=CELLMEMORY_LEVEL
INMEMORY_SIZE=0

enables the IM column store at the cell level while minimizing IM cache memory consumption at the DB level.
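
A sketch of how these could be set (scope=spfile and the restart are my assumptions; check the 19.8 patch README and the Database In-Memory documentation for the exact procedure):

SQL> alter system set inmemory_force=CELLMEMORY_LEVEL scope=spfile sid='*';
SQL> alter system set inmemory_size=0 scope=spfile sid='*';
-- restart the instances so the spfile-only changes take effect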

From my point of view this feature will greatly improve the scalability of the Exadata Database Machine for IM operations, because the more cells you have, the more physical memory, IM cache, and CPU cores are available. This feature brings even more value to the Exadata cells.


 

Friday, December 6, 2019

Exadata X8-2/X8M-2 don't support 1GbE client network


One of our Exadata customers started using Exadata with the X3-2 model, seven years ago.
It is still using 1GbE (copper Gigabit Ethernet) in its enterprise network.

But today 1GbE is a problem for Exadata X8-2/X8M-2: as it turned out, Exadata X8-2/X8M-2 does not support 1GbE.


Here is the answer from the SR:
" We have checked on this whether any of the ports on X8s are supported to run at 1GbE, and the answer was 'no', we don't test at 1GbE, thus we don't support 1GbE. This is for both "copper"/RJ-45 10GBase-T, and the SFP28 with 10GbE transceivers. While the underlying hardware may/does support 1GbE, as an engineered system, X8s/X8Ms do not support the same. "


But the good news: "Don't support" doesn't mean "don't work"!!!

The documentation says the X8-2 server supports 1GbE copper:
https://docs.oracle.com/cd/E93361_01/html/E93391/gtaii.html

And the Exadata rear panel view shows 1GbE ports:

https://support.oracle.com/handbook_partner/Systems/Exadata_X8_2/images/X8_2_rear_zoom.html


So we asked our engineer to test the 1GbE ports on an X8-2, and he confirmed that ports NET1/NET2 work at 1GbE. He manually set 1GbE on the ports and copied some files through them; the ethtool output is below, and a sketch of the speed setting follows it:

[root@node8 ~]# ethtool eth1
Settings for eth1:
        Supported ports: [ TP ]
        Supported link modes:   1000baseT/Full
                                10000baseT/Full
        Supported pause frame use: Symmetric Receive-only
        Supports auto-negotiation: Yes
        Supported FEC modes: Not reported
        Advertised link modes:  1000baseT/Full
                                10000baseT/Full
        Advertised pause frame use: No
        Advertised auto-negotiation: Yes
        Advertised FEC modes: Not reported
        Speed: 1000Mb/s
        Duplex: Full
        Port: Twisted Pair
        PHYAD: 12
        Transceiver: internal
        Auto-negotiation: on
        MDI-X: Unknown
        Supports Wake-on: d
        Wake-on: d
        Current message level: 0x00000000 (0)

        Link detected: yes
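
For reference, one possible way to force 1 Gb/s on such a port with ethtool (a sketch, not necessarily how our engineer did it; on Exadata the setting should also be persisted in the interface configuration files):

[root@node8 ~]# ethtool -s eth1 speed 1000 duplex full autoneg off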
