Part 1: http://exadata-dba.blogspot.com/2018/12/upgrade-gi-from-122-to-184-on-virtual.html
1. Read the documentation:
https://docs.oracle.com/en/database/oracle/oracle-database/18/cwlin/upgrading-oracle-grid-infrastructure.html#GUID-DF76F201-3374-486F-9D19-06276764569F
18.1.0.0 Grid Infrastructure and Database Upgrade steps for Exadata Database Machine running 11.2.0.4 and later on Oracle Linux (Doc ID 2369422.1)
Patches to apply before upgrading Oracle GI and DB to 18c or downgrading to previous release (Doc ID 2414935.1)
2. Download the GI home and related files: the GI base release, the latest OPatch, and the GI RU (p28689122_184000_Linux-x86-64.zip).
3. Create a new Oracle Grid Infrastructure home. In this installation I assume the GI files were unzipped in the previous step as the root user into the /u01/app/18c/grid file system, so we need to log in to the VM and run chown -R oracle:oinstall /u01/app/18c/grid. In this environment both GI and RDBMS software are owned by the oracle user, so GI ownership is changed to oracle:oinstall. Usually I prefer separate ownership (the grid user for GI and the oracle user for RDBMS); in that case you should chown grid:oinstall $GI_HOME instead.
As root on the DomU:
# mkdir -p /u01/app/18c/grid
# cat /etc/fstab
# echo "/dev/xvdg /u01/app/18c/grid ext4 defaults 1 1" >> /etc/fstab
# cat /etc/fstab
# mount /u01/app/18c/grid
# chown -R oracle:oinstall /u01/app/18c/grid
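Before handing the home over to the installer, it may be worth confirming the mount and the ownership with a quick check (a sketch; adjust the path to your environment):
# df -h /u01/app/18c/grid
# ls -ld /u01/app/18c/grid
The first command should show the new ext4 file system mounted on the GI home, the second should show oracle:oinstall as the owner.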
4. The next step is to update the OPatch software in the GI home. The latest OPatch version at the time of writing is 16, and I updated OPatch simply by unzipping it into the GI home:
# su - oracle
$ unzip /store/p6880880_180000_Linux-x86-64.zip -d /u01/app/18c/grid
Archive: /u01/opatch/p6880880_180000_Linux-x86-64.zip
replace /u01/app/18c/grid/OPatch/opatchprereqs/oui/knowledgesrc.xml? [y]es, [n]o, [A]ll, [N]one, [r]ename: A
and answer A (overwrite always).
The same OPatch archive is suitable for both GI and RDBMS homes.
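To confirm that the GI home now carries the new OPatch, check its version (a sketch):
$ /u01/app/18c/grid/OPatch/opatch version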
5. The next step is to apply the latest GI RU patch (18.4 in my case) to the base release (18.3 in my case):
$ cd /u01/app/18c/grid
$ ./gridSetup.sh -silent -applyPSU /store/tmpEXAPSU/28689122/28659165
At this step only the base release binaries are patched. The 18.4 RU adds about 2.5 GB to the GI_HOME.
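Once the 18c home is registered in the central inventory (which happens during the upgrade run below), the applied RU level can be listed with OPatch (a sketch):
$ /u01/app/18c/grid/OPatch/opatch lspatches -oh /u01/app/18c/grid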
6. The next step is to edit the $GI_HOME/install/response/gridsetup.rsp file. I changed two empty parameters.
Before edit:
oracle.install.option=
ORACLE_BASE=
After edit:
oracle.install.option=UPGRADE
ORACLE_BASE=/u01/app/oracle
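To double-check both parameters before launching the installer, a simple grep can be used (a sketch):
$ grep -E "^(oracle.install.option|ORACLE_BASE)=" /u01/app/18c/grid/install/response/gridsetup.rsp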
7. The final step consists of running the gridSetup.sh script.
On my first run
$ ./gridSetup.sh -silent -responseFile /u01/app/18c/grid/install/response/gridsetup.rsp
Launching Oracle Grid Infrastructure Setup Wizard...
I obtained the following failure message:
[FATAL] [INS-13019] Some mandatory prerequisites are not met. These prerequisites cannot be ignored.
ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/GridSetupActions2018-12-20_11-51-23AM/gridSetupActions2018-12-20_11-51-23AM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
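Before ignoring a FATAL prerequisite it is worth re-checking the reported condition by hand. A minimal writability check for ORACLE_BASE, run as the grid software owner (a sketch; the .write_test file name is arbitrary):
$ touch /u01/app/oracle/.write_test && rm /u01/app/oracle/.write_test && echo "ORACLE_BASE is writable"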
The log file said that “ORACLE_BASE directory /u01/app/oracle is not writable”. I double checked the writability of ORACLE_BASE = /u01/app/oracle (the check above passed) and found it writable, so I decided to ignore this fatal error with the -skipPrereqs option:
$ ./gridSetup.sh -silent -responseFile /u01/app/18c/grid/install/response/gridsetup.rsp -skipPrereqs
Launching Oracle Grid Infrastructure Setup Wizard...
The response file for this session can be found at: /u01/app/18c/grid/install/response/grid_2018-12-20_11-55-36AM.rsp
You can find the log of this install session at: /u01/app/oraInventory/logs/GridSetupActions2018-12-20_11-55-36AM/gridSetupActions2018-12-20_11-55-36AM.log
As a root user, execute the following script(s):
1. /u01/app/18c/grid/rootupgrade.sh
Execute /u01/app/18c/grid/rootupgrade.sh on the following nodes:
[var01vm03]
Successfully Setup Software.
As install user, execute the following command to complete the configuration.
/u01/app/18c/grid/gridSetup.sh -executeConfigTools -responseFile /u01/app/18c/grid/install/response/gridsetup.rsp [-silent]
The gridSetup.sh run completed successfully, so the next two steps are:
1. /u01/app/18c/grid/rootupgrade.sh
2. gridSetup.sh -executeConfigTools …
8. The next error came from the rootupgrade.sh script:
[root@VM03]# /u01/app/18c/grid/rootupgrade.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/18c/grid
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/18c/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/oracle/crsdata/var01vm03/crsconfig/rootcrs_var01vm03_2018-12-20_12-04-43AM.log
2018/12/20 12:04:45 CLSRSC-697: Failed to get the value of environment variable 'TZ' from the environment file '/u01/app/12.1.0.2/grid/crs/install/s_crsconfig_var01vm03_env.txt'
Died at /u01/app/18c/grid/crs/install/crsutils.pm line 17076.
There is no 12.1 GI in this configuration; it was removed half a year ago. So the simple solution is to find this file and temporarily copy the existing one to the expected location:
[root@VM03]# find /u01 -name s_crsconfig_var01vm03_env.txt
/u01/app/12.2.0.1/grid/crs/install/s_crsconfig_var01vm03_env.txt
[root@VM03]# mkdir -p /u01/app/12.1.0.2/grid/crs/install/
[root@VM03]# cp /u01/app/12.2.0.1/grid/crs/install/s_crsconfig_var01vm03_env.txt /u01/app/12.1.0.2/grid/crs/install/
[root@VM03]# chown -R oracle:oinstall /u01/app/12.1.0.2
And run rootupgrade.sh one more time:
[root@VM03]# /u01/app/18c/grid/rootupgrade.sh
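Once rootupgrade.sh completes cleanly, the Clusterware versions can be confirmed from the new home (a sketch):
# /u01/app/18c/grid/bin/crsctl query crs activeversion
# /u01/app/18c/grid/bin/crsctl query crs softwareversion
On this single-node cluster both should already report the 18c release after the script finishes.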
9. The next step is to run executeConfigTools:
[oracle@VM03]$ /u01/app/18c/grid/gridSetup.sh -executeConfigTools -responseFile /u01/app/18c/grid/install/response/gridsetup.rsp -silent
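After executeConfigTools finishes, the stack and its resources can be sanity-checked (a sketch):
$ /u01/app/18c/grid/bin/crsctl check cluster -all
$ /u01/app/18c/grid/bin/crsctl stat res -t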
10. MGMT DB
After the upgrade we noticed that there is no MGMT DB in our GI 18.4. The investigation showed that there was no MGMT DB in the previous 12.2 GI either, so the 12.2.0.1 -> 18c upgrade completed successfully despite the missing MGMT DB.
How to add the MGMT DB to GI 18.4:
$ /u01/app/18c/grid/bin/dbca -silent -createDatabase -createAsContainerDatabase true -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb -storageType ASM -diskGroupName DATAC8 -datafileJarLocation /u01/app/18c/grid/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -skipUserTemplateCheck
$ /u01/app/18c/grid/bin/mgmtca -local
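To verify the management database afterwards, srvctl can be used from the GI home (a sketch):
$ /u01/app/18c/grid/bin/srvctl status mgmtdb
$ /u01/app/18c/grid/bin/srvctl config mgmtdb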
-------------------------------------------------------------------------------------
The log of the rootupgrade.sh script, all 19 steps:
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/18c/grid
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/18c/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/oracle/crsdata/var01vm03/crsconfig/rootcrs_var01vm03_2018-12-20_12-14-20AM.log
2018/12/20 12:14:32 CLSRSC-595: Executing upgrade step 1 of 19: 'UpgradeTFA'.
2018/12/20 12:14:32 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2018/12/20 12:15:20 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2018/12/20 12:15:20 CLSRSC-595: Executing upgrade step 2 of 19: 'ValidateEnv'.
2018/12/20 12:15:24 CLSRSC-595: Executing upgrade step 3 of 19: 'GetOldConfig'.
2018/12/20 12:15:24 CLSRSC-464: Starting retrieval of the cluster configuration data
2018/12/20 12:15:28 CLSRSC-692: Checking whether CRS entities are ready for upgrade. This operation may take a few minutes.
2018/12/20 12:16:59 CLSRSC-693: CRS entities validation completed successfully.
2018/12/20 12:17:02 CLSRSC-515: Starting OCR manual backup.
2018/12/20 12:17:08 CLSRSC-516: OCR manual backup successful.
2018/12/20 12:17:13 CLSRSC-486:
At this stage of upgrade, the OCR has changed.
Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.
2018/12/20 12:17:13 CLSRSC-541:
To downgrade the cluster:
1. All nodes that have been upgraded must be downgraded.
2018/12/20 12:17:13 CLSRSC-542:
2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.
2018/12/20 12:17:13 CLSRSC-615:
3. The last node to downgrade cannot be a Leaf node.
2018/12/20 12:17:16 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2018/12/20 12:17:16 CLSRSC-595: Executing upgrade step 4 of 19: 'GenSiteGUIDs'.
2018/12/20 12:17:22 CLSRSC-595: Executing upgrade step 5 of 19: 'UpgPrechecks'.
2018/12/20 12:17:24 CLSRSC-363: User ignored prerequisites during installation
2018/12/20 12:17:31 CLSRSC-595: Executing upgrade step 6 of 19: 'SaveParamFile'.
2018/12/20 12:17:36 CLSRSC-595: Executing upgrade step 7 of 19: 'SetupOSD'.
2018/12/20 12:17:36 CLSRSC-595: Executing upgrade step 8 of 19: 'PreUpgrade'.
2018/12/20 12:18:34 CLSRSC-470: Starting non-rolling migration of Oracle ASM
2018/12/20 12:18:34 CLSRSC-482: Running command: '/u01/app/18c/grid/bin/asmca -silent -upgradeNodeASM -nonRolling true -oldCRSHome /u01/app/12.2.0.1/grid -oldCRSVersion 12.2.0.1.0 -firstNode true -startRolling false '
ASM configuration upgraded in local node successfully.
2018/12/20 12:18:41 CLSRSC-471: Successfully initiated non-rolling migration of Oracle ASM
2018/12/20 12:18:43 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2018/12/20 12:19:25 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2018/12/20 12:19:28 CLSRSC-595: Executing upgrade step 9 of 19: 'CheckCRSConfig'.
2018/12/20 12:19:28 CLSRSC-595: Executing upgrade step 10 of 19: 'UpgradeOLR'.
2018/12/20 12:19:35 CLSRSC-595: Executing upgrade step 11 of 19: 'ConfigCHMOS'.
2018/12/20 12:19:35 CLSRSC-595: Executing upgrade step 12 of 19: 'UpgradeAFD'.
2018/12/20 12:19:40 CLSRSC-595: Executing upgrade step 13 of 19: 'createOHASD'.
2018/12/20 12:19:44 CLSRSC-595: Executing upgrade step 14 of 19: 'ConfigOHASD'.
2018/12/20 12:19:45 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'
2018/12/20 12:20:16 CLSRSC-595: Executing upgrade step 15 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'var01vm03'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'var01vm03' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2018/12/20 12:20:54 CLSRSC-595: Executing upgrade step 16 of 19: 'InstallKA'.
2018/12/20 12:21:18 CLSRSC-595: Executing upgrade step 17 of 19: 'UpgradeCluster'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'var01vm03'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'var01vm03' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.evmd' on 'var01vm03'
CRS-2672: Attempting to start 'ora.mdnsd' on 'var01vm03'
CRS-2676: Start of 'ora.mdnsd' on 'var01vm03' succeeded
CRS-2676: Start of 'ora.evmd' on 'var01vm03' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'var01vm03'
CRS-2676: Start of 'ora.gpnpd' on 'var01vm03' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'var01vm03'
CRS-2676: Start of 'ora.gipcd' on 'var01vm03' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'var01vm03'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'var01vm03'
CRS-2676: Start of 'ora.cssdmonitor' on 'var01vm03' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'var01vm03'
CRS-2672: Attempting to start 'ora.diskmon' on 'var01vm03'
CRS-2676: Start of 'ora.crf' on 'var01vm03' succeeded
CRS-2676: Start of 'ora.diskmon' on 'var01vm03' succeeded
CRS-2676: Start of 'ora.cssd' on 'var01vm03' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'var01vm03'
CRS-2676: Start of 'ora.ctssd' on 'var01vm03' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'var01vm03'
CRS-2676: Start of 'ora.asm' on 'var01vm03' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'var01vm03'
CRS-2676: Start of 'ora.storage' on 'var01vm03' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'var01vm03'
CRS-2676: Start of 'ora.crsd' on 'var01vm03' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: var01vm03
CRS-2672: Attempting to start 'ora.scan3.vip' on 'var01vm03'
CRS-2672: Attempting to start 'ora.var01vm03.vip' on 'var01vm03'
CRS-2672: Attempting to start 'ora.scan1.vip' on 'var01vm03'
CRS-2672: Attempting to start 'ora.scan2.vip' on 'var01vm03'
CRS-2672: Attempting to start 'ora.MGMTLSNR' on 'var01vm03'
CRS-2672: Attempting to start 'ora.ons' on 'var01vm03'
CRS-2676: Start of 'ora.MGMTLSNR' on 'var01vm03' succeeded
CRS-2676: Start of 'ora.var01vm03.vip' on 'var01vm03' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'var01vm03'
CRS-2676: Start of 'ora.scan3.vip' on 'var01vm03' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'var01vm03'
CRS-2676: Start of 'ora.scan1.vip' on 'var01vm03' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'var01vm03'
CRS-2676: Start of 'ora.scan2.vip' on 'var01vm03' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'var01vm03'
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'var01vm03' succeeded
CRS-2676: Start of 'ora.ons' on 'var01vm03' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'var01vm03' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'var01vm03' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'var01vm03' succeeded
CRS-6016: Resource auto-start has completed for server var01vm03
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2018/12/20 12:22:43 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 2.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2018/12/20 12:23:00 CLSRSC-595: Executing upgrade step 18 of 19: 'UpgradeNode'.
2018/12/20 12:23:03 CLSRSC-474: Initiating upgrade of resource types
2018/12/20 12:23:33 CLSRSC-475: Upgrade of resource types successfully initiated.
Start upgrade invoked..
2018/12/20 12:23:34 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded
2018/12/20 12:23:34 CLSRSC-482: Running command: '/u01/app/18c/grid/bin/crsctl set crs activeversion'
Started to upgrade the active version of Oracle Clusterware. This operation may take a few minutes.
Started to upgrade CSS.
CSS was successfully upgraded.
Started to upgrade CRS.
CRS was successfully upgraded.
Successfully upgraded the active version of Oracle Clusterware.
Oracle Clusterware active version was successfully set to 18.0.0.0.0.
2018/12/20 12:24:36 CLSRSC-479: Successfully set Oracle Clusterware active version
2018/12/20 12:24:36 CLSRSC-476: Finishing upgrade of resource types
2018/12/20 12:24:37 CLSRSC-477: Successfully completed upgrade of resource types
2018/12/20 12:25:42 CLSRSC-595: Executing upgrade step 19 of 19: 'PostUpgrade'.
2018/12/20 12:25:43 CLSRSC-476: Finishing upgrade of resource types
2018/12/20 12:25:44 CLSRSC-477: Successfully completed upgrade of resource types
2018/12/20 12:25:49 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded