Monday, December 31, 2018

Upgrade GI from 12.2 to 18.4 on Virtual Exadata. Part 2.

Part 1: http://exadata-dba.blogspot.com/2018/12/upgrade-gi-from-122-to-184-on-virtual.html

1. Read the documentation:
18.1.0.0 Grid Infrastructure and Database Upgrade steps for Exadata Database Machine running 11.2.0.4 and later on Oracle Linux (Doc ID 2369422.1)
Patches to apply before upgrading Oracle GI and DB to 18c or downgrading to previous release (Doc ID 2414935.1)

2.
Download the GI home and the related files: the GI base release, the latest OPatch, and the GI RU (p28689122_184000_Linux-x86-64.zip).

3.
Create a new Oracle Grid Infrastructure home. In this installation I assume the GI files were unzipped at the previous step as the root user into the /u01/app/18c/grid file system, so we need to log in to the VM and chown -R oracle:oinstall /u01/app/18c/grid. In this environment the GI and RDBMS software run under the oracle user, so we have to change the ownership of GI to oracle:oinstall. Usually I prefer separate ownership: a grid user for GI and an oracle user for RDBMS; in that case you should chown grid:oinstall $GI_HOME (a sketch for that variant follows the commands below).
As root@DomU:

# mkdir -p /u01/app/18c/grid
# cat /etc/fstab
# echo "/dev/xvdg                /u01/app/18c/grid          ext4   defaults     1 1" >>/etc/fstab
# cat /etc/fstab
# mount /u01/app/18c/grid
# chown -R oracle:oinstall /u01/app/18c/grid
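If you prefer role separation (a dedicated grid OS user owning GI), the equivalent ownership change would look like this (a sketch; it assumes a grid user in the oinstall group already exists on the VM):

# chown -R grid:oinstall /u01/app/18c/grid
# ls -ld /u01/app/18c/grid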

4.
The next step is to update the OPatch software in the GI home. The latest OPatch at the time of writing was the .16 release; I updated OPatch simply by unzipping it into the GI home:
# su - oracle
$ unzip /store/p6880880_180000_Linux-x86-64.zip -d /u01/app/18c/grid
Archive:  /u01/opatch/p6880880_180000_Linux-x86-64.zip

replace /u01/app/18c/grid/OPatch/opatchprereqs/oui/knowledgesrc.xml? [y]es, [n]o, [A]ll, [N]one, [r]ename: A

and answer A (overwrite all).
The same OPatch zip works for both the GI and RDBMS homes.
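To confirm that the home now runs the new OPatch (a quick check, not part of the original transcript):

$ /u01/app/18c/grid/OPatch/opatch version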

5.
The next step is to apply the latest GI RU (18.4 in my case) to the base release (18.3 in my case):

$ cd /u01/app/18c/grid
$ ./gridSetup.sh -silent -applyPSU /store/tmpEXAPSU/28689122/28659165
At this step I patched the base release binaries. The 18.4 RU adds about 2.5 GB to the GI_HOME.
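To see which patches actually landed in the new home (a quick check; the -oh option points OPatch at the not-yet-configured home):

$ /u01/app/18c/grid/OPatch/opatch lspatches -oh /u01/app/18c/grid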

6.
The next step is to edit the $GI_HOME/install/response/gridsetup.rsp file. I changed two empty parameters. Before the edit:
oracle.install.option=
ORACLE_BASE=
After the edit:
oracle.install.option=UPGRADE
ORACLE_BASE=/u01/app/oracle
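If you prefer to script the edit instead of using vi, an equivalent sed pass could look like this (a sketch; it assumes both parameters are empty in the stock response file, as shown above):

$ cp /u01/app/18c/grid/install/response/gridsetup.rsp /u01/app/18c/grid/install/response/gridsetup.rsp.orig
$ sed -i -e 's/^oracle.install.option=$/oracle.install.option=UPGRADE/' \
         -e 's|^ORACLE_BASE=$|ORACLE_BASE=/u01/app/oracle|' \
         /u01/app/18c/grid/install/response/gridsetup.rsp
$ grep -E '^(oracle\.install\.option|ORACLE_BASE)=' /u01/app/18c/grid/install/response/gridsetup.rsp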

7.
The final step is to run the gridSetup.sh script.
$ ./gridSetup.sh  -silent -responseFile /u01/app/18c/grid/install/response/gridsetup.rsp
Launching Oracle Grid Infrastructure Setup Wizard...

On my first run,
$ ./gridSetup.sh  -silent -responseFile /u01/app/18c/grid/install/response/gridsetup.rsp
Launching Oracle Grid Infrastructure Setup Wizard...

I got this fatal error:
[FATAL] [INS-13019] Some mandatory prerequisites are not met. These prerequisites cannot be ignored.
   ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/GridSetupActions2018-12-20_11-51-23AM/gridSetupActions2018-12-20_11-51-23AM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
The log file said that “ORACLE_BASE directory /u01/app/oracle is not writable”. I double-checked the writability of ORACLE_BASE = /u01/app/oracle and found it writable (see the quick check below).
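A minimal way to check writability as the GI owner is something like this (reconstructed here as a sketch, not the exact commands I ran):

# su - oracle -c 'touch /u01/app/oracle/.writetest && rm /u01/app/oracle/.writetest'
# ls -ld /u01/app/oracle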
So I decided to ignore this fatal error with the -skipPrereqs option:
$ ./gridSetup.sh  -silent -responseFile /u01/app/18c/grid/install/response/gridsetup.rsp -skipPrereqs
Launching Oracle Grid Infrastructure Setup Wizard...

The response file for this session can be found at: /u01/app/18c/grid/install/response/grid_2018-12-20_11-55-36AM.rsp

You can find the log of this install session at: /u01/app/oraInventory/logs/GridSetupActions2018-12-20_11-55-36AM/gridSetupActions2018-12-20_11-55-36AM.log

As a root user, execute the following script(s):
        1. /u01/app/18c/grid/rootupgrade.sh

Execute /u01/app/18c/grid/rootupgrade.sh on the following nodes:
[var01vm03]

Successfully Setup Software.
As install user, execute the following command to complete the configuration.
        /u01/app/18c/grid/gridSetup.sh -executeConfigTools -responseFile /u01/app/18c/grid/install/response/gridsetup.rsp [-silent]

gridSetup.sh completed successfully, so we now need to run the next two steps:
1.  /u01/app/18c/grid/rootupgrade.sh
2.        gridSetup.sh -executeConfigTools …

8.
The next error came from the rootupgrade.sh script:
[root@VM03]# /u01/app/18c/grid/rootupgrade.sh
Performing root user operation.
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/18c/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/18c/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/var01vm03/crsconfig/rootcrs_var01vm03_2018-12-20_12-04-43AM.log
2018/12/20 12:04:45 CLSRSC-697: Failed to get the value of environment variable 'TZ' from the environment file '/u01/app/12.1.0.2/grid/crs/install/s_crsconfig_var01vm03_env.txt'
Died at /u01/app/18c/grid/crs/install/crsutils.pm line 17076.

There is no 12.1 GI in this configuration; it was removed half a year ago. So the simple solution is to find this file and temporarily copy it to the location the script expects:
[root@VM03]# find /u01 -name s_crsconfig_var01vm03_env.txt
/u01/app/12.2.0.1/grid/crs/install/s_crsconfig_var01vm03_env.txt

[root@VM03]# mkdir -p /u01/app/12.1.0.2/grid/crs/install/
[root@VM03]# cp /u01/app/12.2.0.1/grid/crs/install/s_crsconfig_var01vm03_env.txt /u01/app/12.1.0.2/grid/crs/install/
[root@VM03]# chown -R oracle:oinstall /u01/app/12.1.0.2
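A quick sanity check that the copied file really contains the TZ value that CLSRSC-697 was looking for (not in the original transcript):

[root@VM03]# grep '^TZ' /u01/app/12.1.0.2/grid/crs/install/s_crsconfig_var01vm03_env.txt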

And run rootupgrade.sh one more time:
[root@VM03]# /u01/app/18c/grid/rootupgrade.sh

9.
The next step is to run executeConfigTools:
[oracle@VM03]$ /u01/app/18c/grid/gridSetup.sh -executeConfigTools -responseFile /u01/app/18c/grid/install/response/gridsetup.rsp -silent

10. MGMT DB
After the upgrade we noticed that there was no MGMT DB in our GI 18.4. Investigation showed that there had been no MGMT DB in the previous 12.2 GI either, so the 12.2.0.1 -> 18c upgrade completed successfully despite its absence.
How to add the MGMT DB to GI 18.4:
/u01/app/18c/grid/bin/dbca -silent -createDatabase -createAsContainerDatabase true -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb -storageType ASM -diskGroupName DATAC8 \
  -datafileJarLocation /u01/app/18c/grid/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -skipUserTemplateCheck

$ /u01/app/18c/grid/bin/mgmtca -local
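To verify the result (a quick check, assuming the MGMT DB resources registered as usual):

$ /u01/app/18c/grid/bin/srvctl status mgmtdb
$ /u01/app/18c/grid/bin/crsctl stat res ora.mgmtdb -t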


-------------------------------------------------------------------------------------
The full log of the rootupgrade.sh script, all 19 steps:

Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/18c/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/18c/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/oracle/crsdata/var01vm03/crsconfig/rootcrs_var01vm03_2018-12-20_12-14-20AM.log
2018/12/20 12:14:32 CLSRSC-595: Executing upgrade step 1 of 19: 'UpgradeTFA'.
2018/12/20 12:14:32 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2018/12/20 12:15:20 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2018/12/20 12:15:20 CLSRSC-595: Executing upgrade step 2 of 19: 'ValidateEnv'.
2018/12/20 12:15:24 CLSRSC-595: Executing upgrade step 3 of 19: 'GetOldConfig'.
2018/12/20 12:15:24 CLSRSC-464: Starting retrieval of the cluster configuration data
2018/12/20 12:15:28 CLSRSC-692: Checking whether CRS entities are ready for upgrade. This operation may take a few minutes.
2018/12/20 12:16:59 CLSRSC-693: CRS entities validation completed successfully.
2018/12/20 12:17:02 CLSRSC-515: Starting OCR manual backup.
2018/12/20 12:17:08 CLSRSC-516: OCR manual backup successful.
2018/12/20 12:17:13 CLSRSC-486:
 At this stage of upgrade, the OCR has changed.
 Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.
2018/12/20 12:17:13 CLSRSC-541:
 To downgrade the cluster:
 1. All nodes that have been upgraded must be downgraded.
2018/12/20 12:17:13 CLSRSC-542:
 2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.
2018/12/20 12:17:13 CLSRSC-615:
 3. The last node to downgrade cannot be a Leaf node.
2018/12/20 12:17:16 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2018/12/20 12:17:16 CLSRSC-595: Executing upgrade step 4 of 19: 'GenSiteGUIDs'.
2018/12/20 12:17:22 CLSRSC-595: Executing upgrade step 5 of 19: 'UpgPrechecks'.
2018/12/20 12:17:24 CLSRSC-363: User ignored prerequisites during installation
2018/12/20 12:17:31 CLSRSC-595: Executing upgrade step 6 of 19: 'SaveParamFile'.
2018/12/20 12:17:36 CLSRSC-595: Executing upgrade step 7 of 19: 'SetupOSD'.
2018/12/20 12:17:36 CLSRSC-595: Executing upgrade step 8 of 19: 'PreUpgrade'.
2018/12/20 12:18:34 CLSRSC-470: Starting non-rolling migration of Oracle ASM
2018/12/20 12:18:34 CLSRSC-482: Running command: '/u01/app/18c/grid/bin/asmca -silent -upgradeNodeASM -nonRolling true -oldCRSHome /u01/app/12.2.0.1/grid -oldCRSVersion 12.2.0.1.0 -firstNode true -startRolling false '

ASM configuration upgraded in local node successfully.

2018/12/20 12:18:41 CLSRSC-471: Successfully initiated non-rolling migration of Oracle ASM
2018/12/20 12:18:43 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2018/12/20 12:19:25 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2018/12/20 12:19:28 CLSRSC-595: Executing upgrade step 9 of 19: 'CheckCRSConfig'.
2018/12/20 12:19:28 CLSRSC-595: Executing upgrade step 10 of 19: 'UpgradeOLR'.
2018/12/20 12:19:35 CLSRSC-595: Executing upgrade step 11 of 19: 'ConfigCHMOS'.
2018/12/20 12:19:35 CLSRSC-595: Executing upgrade step 12 of 19: 'UpgradeAFD'.
2018/12/20 12:19:40 CLSRSC-595: Executing upgrade step 13 of 19: 'createOHASD'.
2018/12/20 12:19:44 CLSRSC-595: Executing upgrade step 14 of 19: 'ConfigOHASD'.
2018/12/20 12:19:45 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'
2018/12/20 12:20:16 CLSRSC-595: Executing upgrade step 15 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'var01vm03'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'var01vm03' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2018/12/20 12:20:54 CLSRSC-595: Executing upgrade step 16 of 19: 'InstallKA'.
2018/12/20 12:21:18 CLSRSC-595: Executing upgrade step 17 of 19: 'UpgradeCluster'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'var01vm03'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'var01vm03' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.evmd' on 'var01vm03'
CRS-2672: Attempting to start 'ora.mdnsd' on 'var01vm03'
CRS-2676: Start of 'ora.mdnsd' on 'var01vm03' succeeded
CRS-2676: Start of 'ora.evmd' on 'var01vm03' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'var01vm03'
CRS-2676: Start of 'ora.gpnpd' on 'var01vm03' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'var01vm03'
CRS-2676: Start of 'ora.gipcd' on 'var01vm03' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'var01vm03'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'var01vm03'
CRS-2676: Start of 'ora.cssdmonitor' on 'var01vm03' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'var01vm03'
CRS-2672: Attempting to start 'ora.diskmon' on 'var01vm03'
CRS-2676: Start of 'ora.crf' on 'var01vm03' succeeded
CRS-2676: Start of 'ora.diskmon' on 'var01vm03' succeeded
CRS-2676: Start of 'ora.cssd' on 'var01vm03' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'var01vm03'
CRS-2676: Start of 'ora.ctssd' on 'var01vm03' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'var01vm03'
CRS-2676: Start of 'ora.asm' on 'var01vm03' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'var01vm03'
CRS-2676: Start of 'ora.storage' on 'var01vm03' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'var01vm03'
CRS-2676: Start of 'ora.crsd' on 'var01vm03' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: var01vm03
CRS-2672: Attempting to start 'ora.scan3.vip' on 'var01vm03'
CRS-2672: Attempting to start 'ora.var01vm03.vip' on 'var01vm03'
CRS-2672: Attempting to start 'ora.scan1.vip' on 'var01vm03'
CRS-2672: Attempting to start 'ora.scan2.vip' on 'var01vm03'
CRS-2672: Attempting to start 'ora.MGMTLSNR' on 'var01vm03'
CRS-2672: Attempting to start 'ora.ons' on 'var01vm03'
CRS-2676: Start of 'ora.MGMTLSNR' on 'var01vm03' succeeded
CRS-2676: Start of 'ora.var01vm03.vip' on 'var01vm03' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'var01vm03'
CRS-2676: Start of 'ora.scan3.vip' on 'var01vm03' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'var01vm03'
CRS-2676: Start of 'ora.scan1.vip' on 'var01vm03' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'var01vm03'
CRS-2676: Start of 'ora.scan2.vip' on 'var01vm03' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'var01vm03'
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'var01vm03' succeeded
CRS-2676: Start of 'ora.ons' on 'var01vm03' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'var01vm03' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'var01vm03' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'var01vm03' succeeded
CRS-6016: Resource auto-start has completed for server var01vm03
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2018/12/20 12:22:43 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 2.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2018/12/20 12:23:00 CLSRSC-595: Executing upgrade step 18 of 19: 'UpgradeNode'.
2018/12/20 12:23:03 CLSRSC-474: Initiating upgrade of resource types
2018/12/20 12:23:33 CLSRSC-475: Upgrade of resource types successfully initiated.
Start upgrade invoked..
2018/12/20 12:23:34 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded
2018/12/20 12:23:34 CLSRSC-482: Running command: '/u01/app/18c/grid/bin/crsctl set crs activeversion'
Started to upgrade the active version of Oracle Clusterware. This operation may take a few minutes.
Started to upgrade CSS.
CSS was successfully upgraded.
Started to upgrade CRS.
CRS was successfully upgraded.
Successfully upgraded the active version of Oracle Clusterware.
Oracle Clusterware active version was successfully set to 18.0.0.0.0.
2018/12/20 12:24:36 CLSRSC-479: Successfully set Oracle Clusterware active version
2018/12/20 12:24:36 CLSRSC-476: Finishing upgrade of resource types
2018/12/20 12:24:37 CLSRSC-477: Successfully completed upgrade of resource types
2018/12/20 12:25:42 CLSRSC-595: Executing upgrade step 19 of 19: 'PostUpgrade'.
2018/12/20 12:25:43 CLSRSC-476: Finishing upgrade of resource types
2018/12/20 12:25:44 CLSRSC-477: Successfully completed upgrade of resource types
2018/12/20 12:25:49 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Sunday, December 23, 2018

Upgrade GI from 12.2 to 18.4 on Virtual Exadata. Part 1: Prepare new file system


In order to upgrade GI we need to:
- read the note 2111010.1,
- prepare a new file system for the 18.4 GI binaries, and
- run $GI/gridSetup.sh


0. Read the documentation

The best note for this operation is 2111010.1.
12.2 Grid Infrastructure and Database Upgrade steps for Exadata Database Machine running 11.2.0.3 and later on Oracle Linux (Doc ID 2111010.1)

 


1. Create new file system


Follow the steps described below. I demonstrate them by creating a new file system for the GI 18c home on VM09 (the 9th guest VM). Log in to the Dom0 where VM09 lives:


[root@Dom0 ~]# cd /EXAVMIMAGES/

[root@Dom0 EXAVMIMAGES]# ll
drwxr-xr-x 2 root root        3896 Aug 23 11:04 conf
-rw-r----- 1 root root 53687091200 Aug  1 16:38 db-klone-Linux-x86-64-12102170418.50.iso
-rw-r----- 1 root root          85 Aug  1 16:38 db-klone-Linux-x86-64-12102170418.50.md5
-rw-r--r-- 1 root root  4169738842 Aug  1 16:29 db-klone-Linux-x86-64-12102170418.zip
-rw-r----- 1 root root 53687091200 Aug  1 16:36 grid-klone-Linux-x86-64-12201180717.50.iso
-rw-r----- 1 root root          87 Aug  1 16:36 grid-klone-Linux-x86-64-12201180717.50.md5
-rw-r--r-- 1 root root  5010176802 Aug  1 16:29 grid-klone-Linux-x86-64-12201180717.zip
drwxr----- 4 root root        3896 Aug 23 11:06 GuestImages
-rw-r----- 1 root root 26843545600 Jun 13  2018 System.first.boot.18.1.5.0.0.180506.img

[root@Dom0 EXAVMIMAGES]# qemu-img create /EXAVMIMAGES/grid-klone-Linux-x86-64-18c_vm09.iso 50G
Formatting '/EXAVMIMAGES/grid-klone-Linux-x86-64-18c_vm09.iso', fmt=raw size=53687091200

[root@Dom0 EXAVMIMAGES]# parted /EXAVMIMAGES/grid-klone-Linux-x86-64-18c_vm09.iso mklabel gpt

[root@Dom0 EXAVMIMAGES]# losetup -f
/dev/loop8

[root@Dom0 EXAVMIMAGES]# losetup /dev/loop8 /EXAVMIMAGES/grid-klone-Linux-x86-64-18c_vm09.iso

[root@Dom0 EXAVMIMAGES]# parted -s /dev/loop8 unit s print
Model:  (file)
Disk /dev/loop8: 104857600s
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start  End  Size  File system  Name  Flags

[root@Dom0 EXAVMIMAGES]# parted -s /dev/loop8 mkpart primary 64s 104857566s set 1
Warning: The resulting partition is not properly aligned for best performance.

[root@Dom0 EXAVMIMAGES]# mkfs -t ext4 -b 4096 /dev/loop8
mke2fs 1.43-WIP (20-Jun-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
3276800 inodes, 13107200 blocks
655360 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
400 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

[root@Dom0 EXAVMIMAGES]# /sbin/tune2fs -c 0 -i 0 /dev/loop8
tune2fs 1.43-WIP (20-Jun-2013)
Setting maximal mount count to -1
Setting interval between checks to 0 seconds

[root@Dom0 EXAVMIMAGES]# losetup -d /dev/loop8
[root@Dom0 EXAVMIMAGES]# sync
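Optionally, sanity-check the new image before handing it to the VM (not part of the original run; parted works directly on the image file):

[root@Dom0 EXAVMIMAGES]# ls -lh /EXAVMIMAGES/grid-klone-Linux-x86-64-18c_vm09.iso
[root@Dom0 EXAVMIMAGES]# parted -s /EXAVMIMAGES/grid-klone-Linux-x86-64-18c_vm09.iso unit s print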


2. Put the binaries to this file system
In this step we put the GI binaries onto the new FS. We do it as the root@Dom0 user.
That is not a mistake: at the next step (after logging in to the VM) I'll do
# chown oracle:oinstall $GI_HOME. You may omit this step and unzip the GI distribution later as the oracle OS user on the VM.
[root@Dom0 EXAVMIMAGES]# mkdir -p /mnt/grid-klone-Linux-x86-64-18c_vm09
[root@Dom0 EXAVMIMAGES]# mount -o loop /EXAVMIMAGES/grid-klone-Linux-x86-64-18c_vm09.iso /mnt/grid-klone-Linux-x86-64-18c_vm09

Unzip the GI distribution to the new file system:
[root@Dom0 EXAVMIMAGES]# ls /EXAVMIMAGES/18c/LINUX.X64_18.3_grid_home.zip
/EXAVMIMAGES/18c/LINUX.X64_18.3_grid_home.zip
[root@Dom0 EXAVMIMAGES]# unzip -q -d /mnt/grid-klone-Linux-x86-64-18c_vm09 /EXAVMIMAGES/18c/LINUX.X64_18.3_grid_home.zip
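Before unmounting, it is worth verifying that the unzip landed where expected (a quick check, not in the original transcript):

[root@Dom0 EXAVMIMAGES]# ls /mnt/grid-klone-Linux-x86-64-18c_vm09/gridSetup.sh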
[root@Dom0 EXAVMIMAGES]# umount /mnt/grid-klone-Linux-x86-64-18c_vm09
[root@Dom0 EXAVMIMAGES]# rm -rf /mnt/grid-klone-Linux-x86-64-18c_vm09


3. Attach the new file system to the guest VM (execute as root@Dom0):
  
[root@Dom0 EXAVMIMAGES]# ls /EXAVMIMAGES/grid-klone-Linux-x86-64-18c_vm09.iso
/EXAVMIMAGES/grid-klone-Linux-x86-64-18c_vm09.iso

[root@Dom0 EXAVMIMAGES]# ls -l /EXAVMIMAGES/GuestImages/
total 8
drwxr----- 2 root root 3896 Nov 25 16:41 mradm01vm08.moex.com
drwxr----- 2 root root 3896 Nov 25 16:39 mradm01vm09.moex.com

[root@Dom0 EXAVMIMAGES]# ls -l /EXAVMIMAGES/GuestImages/mradm01vm09.moex.com/
total 82635782
-rw-r----- 1 root root 53687091200 Dec 21 12:58 db12.1.0.2.170418-3.img
-rw-r----- 1 root root 53687091200 Dec 21 12:58 grid12.2.0.1.180717.img
-rw-r----- 1 root root        2642 Aug 23 11:06 mradm01vm09.moex.com.cell.b0e1c27d1da94115b9344cd72290ed9d.conf
-rw-r----- 1 root root        4363 Aug 23 11:06 mradm01vm09.moex.com.virtualmachine.b0e1c27d1da94115b9344cd72290ed9d.conf
-rw-r----- 1 root root 66571993088 Dec 21 12:58 pv1_vgexadb.img
-rw-r----- 1 root root 26843545600 Dec 21 12:58 System.img
-rw-r----- 1 root root        2292 Nov 25 16:39 vm.cfg

[root@Dom0 EXAVMIMAGES]# reflink /EXAVMIMAGES/grid-klone-Linux-x86-64-18c_vm09.iso /EXAVMIMAGES/GuestImages/mradm01vm09.moex.com/grid18c_vm09.img

[root@Dom0 EXAVMIMAGES]# ls /EXAVMIMAGES/GuestImages/mradm01vm09.moex.com/grid18c_vm09.img
/EXAVMIMAGES/GuestImages/mradm01vm09.moex.com/grid18c_vm09.img


Log in to the VM (DomU) and determine an unused disk device name on this VM:

[root@VM ~]# lsblk -id
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0    0  25G  0 disk
xvdb 202:16   0  50G  0 disk /u01/app/12.2.0.1/grid
xvdc 202:32   0  50G  0 disk /u01/app/oracle/product/12.1.0.2/dbhome_1
xvdd 202:48   0  62G  0 disk

The next free device name is "xvde".

Return to Dom0 and attach the block device to the DomU:
[root@Dom0 EXAVMIMAGES]# xm block-attach mradm01vm09.moex.com file:/EXAVMIMAGES/GuestImages/mradm01vm09.moex.com/grid18c_vm09.img /dev/xvde w
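You can confirm the attach from Dom0 before making it persistent in vm.cfg (a quick check, not in the original transcript):

[root@Dom0 EXAVMIMAGES]# xm block-list mradm01vm09.moex.com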

[root@Dom0 EXAVMIMAGES]# grep ^uuid /EXAVMIMAGES/GuestImages/mradm01vm09.moex.com/vm.cfg
uuid = 'b0e1c27d1da94115b9344cd72290ed9d'

[root@Dom0 EXAVMIMAGES]# uuidgen | tr -d '-'
fb281ca57229494aa886d83d3a4fa5f2

[root@Dom0 EXAVMIMAGES]# ls /EXAVMIMAGES/GuestImages/mradm01vm09.moex.com/grid18c_vm09.img
/EXAVMIMAGES/GuestImages/mradm01vm09.moex.com/grid18c_vm09.img

[root@Dom0 EXAVMIMAGES]# ls /OVS/Repositories/
46baa46cf2bb40718892dc41f50c7b1e/ 4f8f2aa8b83842da98b8d929691c8438/ a379bfb9e50b4038a4386bccf38ebc2c/ b0e1c27d1da94115b9344cd72290ed9d/

[root@Dom0 EXAVMIMAGES]# ln -sf /EXAVMIMAGES/GuestImages/mradm01vm09.moex.com/grid18c_vm09.img /OVS/Repositories/b0e1c27d1da94115b9344cd72290ed9d/VirtualDisks/fb281ca57229494aa886d83d3a4fa5f2.img

[root@Dom0 EXAVMIMAGES]# cd /EXAVMIMAGES/GuestImages/mradm01vm09.moex.com/

[root@Dom0 mradm01vm09.moex.com]# cd ../../
18c/                                        db-klone-Linux-x86-64-12102170418.zip       grid-klone-Linux-x86-64-18c_vm08.iso        System.first.boot.18.1.5.0.0.180506.img
conf/                                       grid-klone-Linux-x86-64-12201180717.50.iso  grid-klone-Linux-x86-64-18c_vm09.iso
db-klone-Linux-x86-64-12102170418.50.iso    grid-klone-Linux-x86-64-12201180717.50.md5  GuestImages/
db-klone-Linux-x86-64-12102170418.50.md5    grid-klone-Linux-x86-64-12201180717.zip     lost+found/

[root@Dom0 mradm01vm09.moex.com]# cd ../mradm01vm09.moex.com/
[root@Dom0 mradm01vm09.moex.com]# cp vm.cfg vm.cfg.20181221

Edit vm.cfg and add a new line for the new disk device (the last entry in the disk list below):
[root@Dom0 mradm01vm09.moex.com]# vi vm.cfg

[root@Dom0 mradm01vm09.moex.com]# cat vm.cfg
acpi = 1
apic = 1
pae = 1
builder = 'hvm'
kernel = '/usr/lib/xen/boot/hvmloader'
device_model = '/usr/lib/xen/bin/qemu-dm'
# To make VMs with more than 12 vCPUs work on exadata server
#   1: Processor Info and Feature Bits
#   This returns the CPU's stepping, model, and family information in EAX (also called the signature of a CPU),
#   feature flags in EDX and ECX, and additional feature info in EBX.
#   The format of the information in EAX is as follows:
#     3:0   - Stepping
#     7:4   - Model
#     11:8  - Family
#     13:12 - Processor Type
#     19:16 - Extended Model
#     27:20 - Extended Family
# Each register has 32 bits with 31st bit in the left end and 0 bit in the right end.
#   edx register:
#     12 bit - Memory Type Range Registers. Force to 0 set uncached access mode to memory ranges.
# Each successive character represent a lesser-significant bit:
#     '1' -> force the corresponding bit to 1
#     '0' -> force to 0
#     'x' -> Get a safe value (pass through and mask with the default policy)
#     'k' -> pass through the host bit value
#     's' -> as 'k' but preserve across save/restore and migration
#               33222222222211111111110000000000
#               10987654321098765432109876543210
cpuid = ['1:edx=xxxxxxxxxxxxxxxxxxx0xxxxxxxxxxxx']
disk = ['file:/OVS/Repositories/b0e1c27d1da94115b9344cd72290ed9d/VirtualDisks/b2c152ef76ce489197f93e2727b216c4.img,xvda,w',
'file:/OVS/Repositories/b0e1c27d1da94115b9344cd72290ed9d/VirtualDisks/e8bbc556ef914757818f60adcd502d37.img,xvdb,w',
'file:/OVS/Repositories/b0e1c27d1da94115b9344cd72290ed9d/VirtualDisks/364ee8782fea430eb2bf6a024191d0d5.img,xvdc,w',
'file:/OVS/Repositories/b0e1c27d1da94115b9344cd72290ed9d/VirtualDisks/99cb835716034a37a79e199aa56b1841.img,xvdd,w',
'file:/OVS/Repositories/b0e1c27d1da94115b9344cd72290ed9d/VirtualDisks/fb281ca57229494aa886d83d3a4fa5f2.img,xvde,w']
memory = '376829'
maxmem = '376829'
OVM_simple_name = 'Exadata VM'
name = 'mradm01vm09.moex.com'
OVM_os_type = 'Oracle Linux 6'
vcpus = 16
maxvcpus = 16
uuid = 'b0e1c27d1da94115b9344cd72290ed9d'
on_crash = 'restart'
on_reboot = 'restart'
serial = 'pty'
keymap = 'en-us'
vif = ['type=netfront,mac=00:16:3e:2c:ec:be,bridge=vmbondeth0.1443','type=netfront,mac=00:16:3e:f4:28:bf,bridge=vmeth0']
timer_mode = 2
ib_pfs = ['3b:00.0']
ib_pkeys = [{'pf':'3b:00.0','port':'1','pkey':['0xffff',]},{'pf':'3b:00.0','port':'2','pkey':['0xffff',]},]


4. The next steps are executed on the guest VM.


We'll create a mount point and mount the new file system inside the VM.
Log in to the DomU:
 

[root@VM]# mkdir -p /u01/app/18c/grid
 

[root@VM]# mount /dev/xvde /u01/app/18c/grid

Add the line to /etc/fstab:

/dev/xvde    /u01/app/18c/grid       ext4    defaults        1 1
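One way to append it, mirroring the approach used elsewhere in this series (it assumes xvde really is the free device on your VM):

[root@VM]# cp /etc/fstab /etc/fstab.bak
[root@VM]# echo "/dev/xvde    /u01/app/18c/grid       ext4    defaults        1 1" >> /etc/fstab
[root@VM]# cat /etc/fstab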

Use df -h to verify the new FS is mounted.
 

[root@VM]# df -h
.../dev/xvde 50G 8.0G 39G 17% /u01/app/18c/grid
 





Lets check how it works. My env is DB 19.20@Linux-x64 1) I created the table with 4 LOB columns of 4 different LOB types: BASICFILE BLOB, BA...