Monday, October 1, 2018

cellcli -e list diskmap

The interesting LIST DISKMAP command on the storage cells ties together all the views of a hard disk: the PCI bus address as the Name (252:5), the OS device name (/dev/sdh, for example), the GridDisk names (DATAC6_CD_05_ed01celadm10, ...), and so on.

Example:

[root@ed01celadm10 ~]# cellcli -e list diskmap
Name       PhysicalSerial       SlotNumber              Status  PhysicalSize CellDisk         DevicePartition GridDisks
252:0      Q4WPNK               0                       normal  9124G        CD_00_ed01celadm10 /dev/sdc        "DATAC6_CD_00_ed01celadm10, RECOC6_CD_00_ed01celadm10"
252:1      Q56YJK               1                       normal  9124G        CD_01_ed01celadm10 /dev/sdd        "DATAC6_CD_01_ed01celadm10, RECOC6_CD_01_ed01celadm10"
252:2      Q599GK               2                       normal  9124G        CD_02_ed01celadm10 /dev/sde        "DATAC6_CD_02_ed01celadm10, RECOC6_CD_02_ed01celadm10"
252:3      Q573LK               3                       normal  9124G        CD_03_ed01celadm10 /dev/sdf        "DATAC6_CD_03_ed01celadm10, RECOC6_CD_03_ed01celadm10"
252:4      Q566UK               4                       normal  9124G        CD_04_ed01celadm10 /dev/sdg        "DATAC6_CD_04_ed01celadm10, RECOC6_CD_04_ed01celadm10"
252:5      Q575PK               5                       normal  9124G        CD_05_ed01celadm10 /dev/sdh        "DATAC6_CD_05_ed01celadm10, RECOC6_CD_05_ed01celadm10"
252:6      Q578AK               6                       normal  9124G        CD_06_ed01celadm10 /dev/sdi        "DATAC6_CD_06_ed01celadm10, RECOC6_CD_06_ed01celadm10"
252:7      Q59BYK               7                       normal  9124G        CD_07_ed01celadm10 /dev/sdj        "DATAC6_CD_07_ed01celadm10, RECOC6_CD_07_ed01celadm10"
252:8      Q551EK               8                       normal  9124G        CD_08_ed01celadm10 /dev/sdk        "DATAC6_CD_08_ed01celadm10, RECOC6_CD_08_ed01celadm10"
252:9      Q57PRK               9                       normal  9124G        CD_09_ed01celadm10 /dev/sdl        "DATAC6_CD_09_ed01celadm10, RECOC6_CD_09_ed01celadm10"
252:10     Q56NSK               10                      normal  9124G        CD_10_ed01celadm10 /dev/sdm        "DATAC6_CD_10_ed01celadm10, RECOC6_CD_10_ed01celadm10"
252:11     Q56J0K               11                      normal  9124G        CD_11_ed01celadm10 /dev/sdn        "DATAC6_CD_11_ed01celadm10, RECOC6_CD_11_ed01celadm10"
FLASH_10_1 PHLE7423009Q6P4BGN-1 "PCI Slot: 10; FDOM: 1" normal  2981G        FD_00_ed01celadm10 /dev/md310      "FLASHCACHE_FD_00_ed01celadm10, FLASHLOG_FD_00_ed01celadm10"
FLASH_10_2 PHLE7423009Q6P4BGN-2 "PCI Slot: 10; FDOM: 2" normal  2981G        FD_00_ed01celadm10 /dev/md310      "FLASHCACHE_FD_00_ed01celadm10, FLASHLOG_FD_00_ed01celadm10"
FLASH_4_1  PHLE742201RY6P4BGN-1 "PCI Slot: 4; FDOM: 1"  normal  2981G        FD_01_ed01celadm10 /dev/md304      "FLASHCACHE_FD_01_ed01celadm10, FLASHLOG_FD_01_ed01celadm10"
FLASH_4_2  PHLE742201RY6P4BGN-2 "PCI Slot: 4; FDOM: 2"  normal  2981G        FD_01_ed01celadm10 /dev/md304      "FLASHCACHE_FD_01_ed01celadm10, FLASHLOG_FD_01_ed01celadm10"
FLASH_5_1  PHLE742200MK6P4BGN-1 "PCI Slot: 5; FDOM: 1"  normal  2981G        FD_02_ed01celadm10 /dev/md305      "FLASHCACHE_FD_02_ed01celadm10, FLASHLOG_FD_02_ed01celadm10"
FLASH_5_2  PHLE742200MK6P4BGN-2 "PCI Slot: 5; FDOM: 2"  normal  2981G        FD_02_ed01celadm10 /dev/md305      "FLASHCACHE_FD_02_ed01celadm10, FLASHLOG_FD_02_ed01celadm10"
FLASH_6_1  PHLE742300C36P4BGN-1 "PCI Slot: 6; FDOM: 1"  normal  2981G        FD_03_ed01celadm10 /dev/md306      "FLASHCACHE_FD_03_ed01celadm10, FLASHLOG_FD_03_ed01celadm10"
FLASH_6_2  PHLE742300C36P4BGN-2 "PCI Slot: 6; FDOM: 2"  normal  2981G        FD_03_ed01celadm10 /dev/md306      "FLASHCACHE_FD_03_ed01celadm10, FLASHLOG_FD_03_ed01celadm10"
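For scripting, the GridDisk-to-device mapping can be pulled out of a saved copy of this listing. A minimal sketch (the `diskmap_to_devices` helper is hypothetical, and the awk field positions are assumptions based on the output above — verify them against your own cell):

```shell
#!/bin/sh
# Hypothetical helper: given saved `cellcli -e list diskmap` output,
# print "GridDisks -> OS device" pairs for the spinning disks.
diskmap_to_devices() {
  # $1: file containing the diskmap listing
  awk '/^25[0-9]:/ {
    # Name Serial Slot Status Size CellDisk DevicePartition "GridDisks..."
    dev = $7
    # everything from field 8 onward is the quoted GridDisks list
    gd = ""
    for (i = 8; i <= NF; i++) gd = gd (gd == "" ? "" : " ") $i
    gsub(/"/, "", gd)
    print gd " -> " dev
  }' "$1"
}
```

Usage: `cellcli -e list diskmap > /tmp/diskmap.txt; diskmap_to_devices /tmp/diskmap.txt`.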




Tuesday, August 14, 2018

Exadata account locked, pam_tally2 and host_access_control

Exadata accounts are configured so that they are locked for 10 minutes after the first wrong password entered. This causes a lot of inconvenience for users.

/sbin/pam_tally2
pam_tally2 is the login counter and lockout utility. For example, I will connect to the oracle user with a wrong password and then flush the counter.

[root@z01dbadm01 ~]# pam_tally2 
                                                   <--- empty output here, so no locked accounts

[root@z01dbadm01 ~]# ssh z01dbadm01 -l oracle
oracle@z01dbadm01's password:    <--- wrong password here
Permission denied, please try again.


After an unsuccessful login attempt you'll see:

[root@z01dbadm01 ~]# pam_tally2
Login           Failures Latest failure     From
oracle              1    08/14/18 17:17:02  z01dbadm01.distr.fors.ru
 

Remove the lock:
[root@z01dbadm01 ~]# pam_tally2 -u oracle -r
Login           Failures Latest failure     From
oracle              1    08/14/18 17:17:02  z01dbadm01.distr.fors.ru
 

[root@z01dbadm01 ~]# pam_tally2 

Empty output = the login is allowed.                           
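The same check can be scripted against saved pam_tally2 output. A sketch, assuming the default deny threshold of 5 (the `locked_users` helper is hypothetical):

```shell
#!/bin/sh
# Hypothetical helper: report users whose failure count in saved
# `pam_tally2` output has reached the deny threshold.
locked_users() {
  # $1: file with pam_tally2 output; $2: deny threshold
  # NR > 1 skips the "Login Failures Latest failure From" header
  awk -v deny="$2" 'NR > 1 && $2 + 0 >= deny + 0 { print $1 }' "$1"
}
```

Usage: `pam_tally2 > /tmp/tally.txt; locked_users /tmp/tally.txt 5`, then `pam_tally2 -u <user> -r` for each reported user.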





[root@z01dbadm01 ~]# chage -l oracle
Last password change                              : Jun 05, 2018
Password expires                                  : Sep 03, 2018
Password inactive                                 : never
Account expires                                   : never
Minimum number of days between password change    : 1
Maximum number of days between password change    : 90
Number of days of warning before password expires : 7



Disable password aging completely for the oracle user:
[root@z01dbadm01 ~]# chage -I -1 -m 0 -M 99999 -E -1 oracle
[root@z01dbadm01 ~]# chage -l oracle
Last password change                              : Jun 05, 2018
Password expires                                  : never
Password inactive                                 : never
Account expires                                   : never
Minimum number of days between password change    : 0
Maximum number of days between password change    : 99999
Number of days of warning before password expires : 7
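To verify the result across several accounts, the `chage -l` output can be checked programmatically. A sketch (the `aging_disabled` helper is hypothetical):

```shell
#!/bin/sh
# Hypothetical check: succeed if saved `chage -l <user>` output shows
# that the password no longer expires (the state after the chage above).
aging_disabled() {
  # $1: file containing `chage -l` output
  grep -q '^Password expires[[:space:]]*:[[:space:]]*never$' "$1"
}
```

Usage: `chage -l oracle > /tmp/oracle.age; aging_disabled /tmp/oracle.age && echo "aging off"`.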






HOST_ACCESS_CONTROL
Here is an extract from host_access_control.log (one of the Exadata installation log files).
I edited the file and kept only the lines related to Linux config files, so you can see the security changes host_access_control makes inside Linux:

Restored Exadata Host Access Control rules to /etc/exadata/security/exadata-access.conf

Setting the SSH Server supported ciphers to arcfour,aes128-ctr,aes192-ctr,aes256-ctr
Setting Ciphers arcfour,aes128-ctr,aes192-ctr,aes256-ctr in /etc/ssh/sshd_config

Setting the SSH Client supported ciphers to arcfour,aes128-ctr,aes192-ctr,aes256-ctr
Setting Ciphers arcfour,aes128-ctr,aes192-ctr,aes256-ctr in /etc/ssh/ssh_config

Shell timeout (TMOUT) set to 14400 in /etc/profile
ClientAliveCountMax set to 0 in /etc/ssh/sshd_config
ClientAliveInterval set to 86400 in /etc/ssh/sshd_config
Restored ILOM CLI TIMEOUT to 15
Restored Exadata Host Access Control rules to /etc/exadata/security/exadata-access.conf
pam_tally2 deny set to 5 in /etc/pam.d/login
pam_tally2 deny set to 5 in /etc/pam.d/sshd
pam_tally2 lock_time set to 600 in /etc/pam.d/login
pam_tally2 lock_time set to 600 in /etc/pam.d/sshd
pam_passwdqc.so min set to 5,5,5,5,5 in /etc/pam.d/password-auth and /etc/pam.d/system-auth
pam_unix.so remember set to 10 in /etc/pam.d/password-auth and /etc/pam.d/system-auth
Restored aging parameters [ -M 99999, -m 0, -W 7 ] for user root
Restored aging parameters [ -M 99999, -m 0, -W 7 ] for user dbmsvc
Restored aging parameters [ -M 99999, -m 0, -W 7 ] for user dbmadmin
Restored aging parameters [ -M 99999, -m 0, -W 7 ] for user dbmmonitor
Setting PASS_MAX_DAYS 90 in /etc/login.defs
Setting PASS_MIN_DAYS 1 in /etc/login.defs
Setting PASS_MIN_LEN 8 in /etc/login.defs
Setting PASS_WARN_AGE 7 in /etc/login.defs
Setting PermitRootLogin yes in /etc/ssh/sshd_config
Setting PasswordAuthentication yes in /etc/ssh/sshd_config
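The sshd_config values reported in the log above can be double-checked directly against the file. A sketch (the `check_sshd_setting` helper is hypothetical; it assumes the simple `Keyword value` layout sshd_config uses):

```shell
#!/bin/sh
# Hypothetical verification of a "Keyword value" setting in a config
# file such as /etc/ssh/sshd_config.
check_sshd_setting() {
  # $1: config file; $2: keyword; $3: expected value
  [ "$(awk -v k="$2" '$1 == k { print $2; exit }' "$1")" = "$3" ]
}
```

Usage: `check_sshd_setting /etc/ssh/sshd_config ClientAliveInterval 86400 && echo OK`.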

/opt/oracle.cellos/host_access_control

host_access_control (an undocumented utility) is the only permitted and supported method for implementing security configuration changes on the Oracle Exadata Storage Servers.
Customers are not permitted to make manual changes to the configuration of these devices, per Oracle Support note 1068804.1.
Further, before using this tool, customers must first obtain explicit approval from Oracle Product Development to change the security configuration of their Oracle Exadata Storage Servers.
To request this approval, customers must open a service request with Oracle Support.


  /opt/oracle.cellos/host_access_control --help
Usage: [-q|--quiet] command [argument]
     command is one of:
     access           - User access from hosts, networks, etc.
     access-ilomweb   - Control overall access from the ILOM Web Remote Console device (tty1)
     access-export    - Export access rules to a file
     access-import    - Import access rules via a supplied file
     audit-rules      - Import audit rules via a supplied file
     banner           - Login banner management
     fips-mode        - FIPS mode for openSSH
     grub-password    - GRUB password control
     idle-timeout     - Shell and SSH client idle timeout control
     ilom-configure   - ILOM settings control
     ilom-password    - ILOM root user password control
     kernel-dump      - kdump (kernel dump file creation) control
     maint-password   - Diagnostic ISO shell and Rescue password control
     pam-auth         - PAM authentication settings: pam_tally2 deny and lock_time, passwdqc, and password history values
     password-aging   - Adjust current users' password aging
     password-policy  - Adjust the system's password age policies
     rootssh          - Root user SSH access control
     sshciphers       - SSH cipher support control
     ssh-listen       - Control the SSHD service optional ListenAddress entries
     ssh-service      - Control the SSHD service and active connections
     sudo             - User privilege control through sudo
     sudodeny         - Manage the Exadata sudo users deny list
     get-runtime      - Maintenance command: import system configuration settings, storing them in host_access_control parameter settings files.
     restore          - Maintenance command: reapply settings previously set by this utility, as in after an upgrade
     (command help by using --help after command (no help with restore command))
     The optional -q|--quiet option is used for silent/noprompting for use with cellcli and must be the first arg.
--------------------------------------------------------------


[root@ed04dbadm01 ~]# /opt/oracle.cellos/host_access_control pam-auth --status

[2018-04-20 16:55:22 +0300] [INFO] [IMG-SEC-0801] Deny on login failure count is deny=5
[2018-04-20 16:55:22 +0300] [INFO] [IMG-SEC-0802] Account lock-out time is lock_time=600
[2018-04-20 16:55:22 +0300] [INFO] [IMG-SEC-0803] Password strength, passwdqc setting is min=5,5,5,5,5
[2018-04-20 16:55:22 +0300] [INFO] [IMG-SEC-0804] Password history depth setting is remember=10

[root@ed04dbadm01 ~]# /opt/oracle.cellos/host_access_control pam-auth -d 10
[2018-04-20 16:56:43 +0300] [INFO] [IMG-SEC-0805] Deny on login failure count set to 10

[root@ed04dbadm01 ~]# /opt/oracle.cellos/host_access_control pam-auth -d 20
[2018-04-20 16:56:51 +0300] [WARNING] [IMG-SEC-0023] Incorrect value for option Integer value for deny option must be between 1 and 10

[root@ed04dbadm01 ~]# /opt/oracle.cellos/host_access_control pam-auth --lock 0
[2018-04-20 16:57:16 +0300] [INFO] [IMG-SEC-0806] Account lock_time after one failed login attempt set to 0

[root@ed04dbadm01 ~]# /opt/oracle.cellos/host_access_control pam-auth --status
[2018-04-20 16:57:33 +0300] [INFO] [IMG-SEC-0801] Deny on login failure count is deny=10
[2018-04-20 16:57:33 +0300] [INFO] [IMG-SEC-0802] Account lock-out time is lock_time=0
[2018-04-20 16:57:33 +0300] [INFO] [IMG-SEC-0803] Password strength, passwdqc setting is min=5,5,5,5,5
[2018-04-20 16:57:33 +0300] [INFO] [IMG-SEC-0804] Password history depth setting is remember=10
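As the IMG-SEC-0023 warning above shows, pam-auth only accepts deny values between 1 and 10. A wrapper can validate the value before calling the tool; a sketch (the `valid_deny` function is hypothetical):

```shell
#!/bin/sh
# Hypothetical guard matching the IMG-SEC-0023 error: the deny option
# must be a plain integer between 1 and 10.
valid_deny() {
  case "$1" in
    ''|*[!0-9]*) return 1 ;;   # empty or not a plain integer
  esac
  [ "$1" -ge 1 ] && [ "$1" -le 10 ]
}
```

Usage: `valid_deny 10 && /opt/oracle.cellos/host_access_control pam-auth -d 10`.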





Monday, August 13, 2018

"Found 3 configured voting files but 2 voting files are required" after upgrade to 12.2.0.1.180717

After upgrading Grid Infrastructure from 12.1 to 12.2 we got the following message:

CRS-1705: Found 3 configured voting files but 2 voting files are required, terminating to ensure data integrity; details at (:CSSNM00021:) in /u01/app/oracle/diag/crs/mr01vm03/crs/trace/ocssd.trc


All steps of the upgrade were successful. There were no errors on the screen or in the log files.
No database software was touched and no OS software was touched; only GI was upgraded.
But all 4 virtual machines upgraded from 12.1 to 12.2.0.1.180717 were in a bad state: Clusterware did not start automatically, and a Linux restart did not help.

A previous upgrade of a 5th VM a few days earlier was successful and produced a working configuration.

The Clusterware (CW) alert log is:
[CSSDMONITOR(121305)]CRS-8500: Oracle Clusterware CSSDMONITOR process is starting with operating system process ID 121305
[CSSDAGENT(121338)]CRS-8500: Oracle Clusterware CSSDAGENT process is starting with operating system process ID 121338
[OHASD(224984)]CRS-2878: Failed to restart resource 'ora.storage'
[ORAROOTAGENT(228831)]CRS-5021: Check of storage failed: details at "(:CLSN00117:)" in "/u01/app/oracle/diag/crs/mr01vm03/crs/trace/ohasd_orarootagent_root.trc"
[CSSDAGENT(123152)]CRS-8500: Oracle Clusterware CSSDAGENT process is starting with operating system process ID 123152
[OCSSD(123165)]CRS-8500: Oracle Clusterware OCSSD process is starting with operating system process ID 123165
[OCSSD(123165)]CRS-1713: CSSD daemon is started in hub mode
[OCSSD(123165)]CRS-1705: Found 3 configured voting files but 2 voting files are required, terminating to ensure data integrity; details at (:CSSNM00021:) in /u01/app/oracle/diag/crs/mr01vm03/crs/trace/ocssd.trc
[OCSSD(123165)]CRS-1656: The CSS daemon is terminating due to a fatal error; Details at (:CSSSC00012:) in /u01/app/oracle/diag/crs/mr01vm03/crs/trace/ocssd.trc
[OCSSD(123165)]CRS-1652: Starting clean up of CRSD resources.
[OCSSD(123165)]CRS-1653: The clean up of the CRSD resources failed.
[OCSSD(123165)]CRS-8503: Oracle Clusterware process OCSSD with operating system process ID 123165 experienced fatal signal or exception code 6.
Errors in file /u01/app/oracle/diag/crs/mr01vm03/crs/trace/ocssd.trc  (incident=729): CRS-8503 [] [] [] [] [] [] [] [] [] [] [] []
Incident details in: /u01/app/oracle/diag/crs/mr01vm03/crs/incident/incdir_729/ocssd_i729.trc


A very strange message: "Found 3 configured voting files but 2 voting files are required".

SOLUTION: move the voting files to another ASM disk group (and then back, if you want them in the original disk group).
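The workaround can be sketched as a script. This is only a sketch of the steps shown in the session below, not an Oracle-documented procedure; the `run`/`move_votedisks` helpers are hypothetical, and with DRYRUN=1 the commands are printed instead of executed:

```shell
#!/bin/sh
# Hypothetical dry-run sketch of the workaround: move the voting files
# to another disk group and back, then restart CRS normally.
GRID_HOME=${GRID_HOME:-/u01/app/12.2.0.1/grid}
run() { if [ "${DRYRUN:-0}" = 1 ]; then echo "WOULD RUN: $*"; else "$@"; fi; }

move_votedisks() {
  # $1: temporary disk group; $2: original disk group
  run "$GRID_HOME/bin/crsctl" stop crs -f
  run "$GRID_HOME/bin/crsctl" start crs -excl
  run "$GRID_HOME/bin/crsctl" replace votedisk "$1"
  run "$GRID_HOME/bin/crsctl" replace votedisk "$2"
  run "$GRID_HOME/bin/crsctl" stop crs
  run "$GRID_HOME/bin/crsctl" start crs
}
```

Usage: `DRYRUN=1 move_votedisks +RECOC3 +DATAC3` to preview, then run without DRYRUN as root on the affected node.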

The command log:

[root@mr01vm03 ~]# cd /u01/app/12.2.0.1/grid/bin/
[root@mr01vm03 bin]# ./crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'mr01vm03'
CRS-2673: Attempting to stop 'ora.crf' on 'mr01vm03'
CRS-2673: Attempting to stop 'ora.diskmon' on 'mr01vm03'
CRS-2673: Attempting to stop 'ora.evmd' on 'mr01vm03'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'mr01vm03'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'mr01vm03'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'mr01vm03'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'mr01vm03'
CRS-2677: Stop of 'ora.drivers.acfs' on 'mr01vm03' succeeded
CRS-2677: Stop of 'ora.crf' on 'mr01vm03' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'mr01vm03'
CRS-2677: Stop of 'ora.gpnpd' on 'mr01vm03' succeeded
CRS-2677: Stop of 'ora.evmd' on 'mr01vm03' succeeded
CRS-2677: Stop of 'ora.cssdmonitor' on 'mr01vm03' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'mr01vm03' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'mr01vm03' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'mr01vm03' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'mr01vm03' has completed
CRS-4133: Oracle High Availability Services has been stopped.
 

[root@mr01vm03 bin]# ./crsctl start crs -excl
 
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.evmd' on 'mr01vm03'
CRS-2672: Attempting to start 'ora.mdnsd' on 'mr01vm03'
CRS-2676: Start of 'ora.mdnsd' on 'mr01vm03' succeeded
CRS-2676: Start of 'ora.evmd' on 'mr01vm03' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'mr01vm03'
CRS-2676: Start of 'ora.gpnpd' on 'mr01vm03' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'mr01vm03'
CRS-2672: Attempting to start 'ora.gipcd' on 'mr01vm03'
CRS-2676: Start of 'ora.cssdmonitor' on 'mr01vm03' succeeded
CRS-2676: Start of 'ora.gipcd' on 'mr01vm03' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'mr01vm03'
CRS-2672: Attempting to start 'ora.diskmon' on 'mr01vm03'
CRS-2676: Start of 'ora.diskmon' on 'mr01vm03' succeeded
CRS-2676: Start of 'ora.cssd' on 'mr01vm03' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'mr01vm03'
CRS-2672: Attempting to start 'ora.ctssd' on 'mr01vm03'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'mr01vm03'
CRS-2676: Start of 'ora.crf' on 'mr01vm03' succeeded
CRS-2676: Start of 'ora.ctssd' on 'mr01vm03' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'mr01vm03' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'mr01vm03'
CRS-2676: Start of 'ora.asm' on 'mr01vm03' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'mr01vm03'
CRS-2676: Start of 'ora.storage' on 'mr01vm03' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'mr01vm03'
CRS-2676: Start of 'ora.crsd' on 'mr01vm03' succeeded





CW 12.2 log:
[OHASD(126854)]CRS-8500: Oracle Clusterware OHASD process is starting with operating system process ID 126854
[OHASD(126854)]CRS-0714: Oracle Clusterware Release 12.2.0.1.0.
[OHASD(126854)]CRS-2112: The OLR service started on node mr01vm03.
[OHASD(126854)]CRS-1301: Oracle High Availability Service started on node mr01vm03.
[OHASD(126854)]CRS-8017: location: /etc/oracle/lastgasp has 2 reboot advisory log files, 0 were announced and 0 errors occurred
Create Relation ADR_CONTROL_AUX
Create Relation DFW_PURGE
[CSSDAGENT(126987)]CRS-8500: Oracle Clusterware CSSDAGENT process is starting with operating system process ID 126987
[ORAROOTAGENT(126973)]CRS-8500: Oracle Clusterware ORAROOTAGENT process is starting with operating system process ID 126973
[CSSDMONITOR(126992)]CRS-8500: Oracle Clusterware CSSDMONITOR process is starting with operating system process ID 126992
[ORAAGENT(126983)]CRS-8500: Oracle Clusterware ORAAGENT process is starting with operating system process ID 126983
[ORAAGENT(127057)]CRS-8500: Oracle Clusterware ORAAGENT process is starting with operating system process ID 127057
[EVMD(127078)]CRS-8500: Oracle Clusterware EVMD process is starting with operating system process ID 127078
[MDNSD(127076)]CRS-8500: Oracle Clusterware MDNSD process is starting with operating system process ID 127076
[GPNPD(127143)]CRS-8500: Oracle Clusterware GPNPD process is starting with operating system process ID 127143
Create Relation DFW_PURGE_ITEM
[CSSDMONITOR(127195)]CRS-8500: Oracle Clusterware CSSDMONITOR process is starting with operating system process ID 127195
[GIPCD(127197)]CRS-8500: Oracle Clusterware GIPCD process is starting with operating system process ID 127197
[GPNPD(127143)]CRS-2328: GPNPD started on node mr01vm03.
[CSSDAGENT(127235)]CRS-8500: Oracle Clusterware CSSDAGENT process is starting with operating system process ID 127235
[OCSSD(127253)]CRS-8500: Oracle Clusterware OCSSD process is starting with operating system process ID 127253
[OCSSD(127253)]CRS-1713: CSSD daemon is started in hub mode
[OCSSD(127253)]CRS-1707: Lease acquisition for node mr01vm03 number 1 completed
[OCSSD(127253)]CRS-1605: CSSD voting file is online: o/192.168.10.5;192.168.10.6/DATAC3_CD_02_mrceladm01; details in /u01/app/oracle/diag/crs/mr01vm03/crs/trace/ocssd.trc.
[OCSSD(127253)]CRS-1605: CSSD voting file is online: o/192.168.10.7;192.168.10.8/DATAC3_CD_02_mrceladm02; details in /u01/app/oracle/diag/crs/mr01vm03/crs/trace/ocssd.trc.
[OCSSD(127253)]CRS-1605: CSSD voting file is online: o/192.168.10.9;192.168.10.10/DATAC3_CD_02_mrceladm03; details in /u01/app/oracle/diag/crs/mr01vm03/crs/trace/ocssd.trc.
[OCSSD(127253)]CRS-1601: CSSD Reconfiguration complete. Active nodes are mr01vm03 .
[OCSSD(127253)]CRS-1720: Cluster Synchronization Services daemon (CSSD) is ready for operation.
[OSYSMOND(127434)]CRS-8500: Oracle Clusterware OSYSMOND process is starting with operating system process ID 127434
[OCTSSD(127440)]CRS-8500: Oracle Clusterware OCTSSD process is starting with operating system process ID 127440
[OCTSSD(127440)]CRS-2403: The Cluster Time Synchronization Service on host mr01vm03 is in observer mode.
[OCTSSD(127440)]CRS-2407: The new Cluster Time Synchronization Service reference node is host mr01vm03.
[OCTSSD(127440)]CRS-2401: The Cluster Time Synchronization Service started on host mr01vm03.
[ORAAGENT(127057)]CRS-5011: Check of resource "ora.asm" failed: details at "(:CLSN00006:)" in "/u01/app/oracle/diag/crs/mr01vm03/crs/trace/ohasd_oraagent_oracle.trc"
[CRSD(127895)]CRS-8500: Oracle Clusterware CRSD process is starting with operating system process ID 127895
[CRSD(127895)]CRS-1012: The OCR service started on node mr01vm03.


[root@mr01vm03 bin]# ./crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
1. ONLINE   5789f55a211a4fadbf851c10d7d2f56d (o/192.168.10.7;192.168.10.8/DATAC3_CD_02_mrceladm02) [DATAC3]
2. ONLINE   00a2d7ffaa294f77bf5e1941f5f3f9b3 (o/192.168.10.5;192.168.10.6/DATAC3_CD_02_mrceladm01) [DATAC3]
3. ONLINE   7179ec74f6474f96bfd0cbace76f4e6e (o/192.168.10.9;192.168.10.10/DATAC3_CD_02_mrceladm03) [DATAC3]
Located 3 voting disk(s).
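A quick sanity check on saved votedisk output can confirm that all voting files are ONLINE. A sketch (the `count_online_votedisks` helper is hypothetical):

```shell
#!/bin/sh
# Hypothetical helper: count ONLINE voting files in saved
# `crsctl query css votedisk` output.
count_online_votedisks() {
  # $1: file containing the votedisk listing
  grep -c ' ONLINE ' "$1"
}
```

Usage: `crsctl query css votedisk > /tmp/vd.txt; count_online_votedisks /tmp/vd.txt`.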
 

[root@mr01vm03 bin]# ./crsctl replace votedisk +RECOC3
 

Successful addition of voting disk 2d8f371e02d54ffabf33e189ed5c42c7.
Successful addition of voting disk dbba5f46190a4f1dbf73753552e2e82b.
Successful addition of voting disk d6caa92e49234f9ebfe6fe749ea7dac9.
Successful deletion of voting disk 5789f55a211a4fadbf851c10d7d2f56d.
Successful deletion of voting disk 00a2d7ffaa294f77bf5e1941f5f3f9b3.
Successful deletion of voting disk 7179ec74f6474f96bfd0cbace76f4e6e.
Successfully replaced voting disk group with +RECOC3.
CRS-4266: Voting file(s) successfully replaced


CW 12.2 log:
[OCSSD(127253)]CRS-1605: CSSD voting file is online: o/192.168.10.7;192.168.10.8/RECOC3_CD_02_mrceladm02; details in /u01/app/oracle/diag/crs/mr01vm03/crs/trace/ocssd.trc.
[OCSSD(127253)]CRS-1605: CSSD voting file is online: o/192.168.10.5;192.168.10.6/RECOC3_CD_04_mrceladm01; details in /u01/app/oracle/diag/crs/mr01vm03/crs/trace/ocssd.trc.
[OCSSD(127253)]CRS-1605: CSSD voting file is online: o/192.168.10.9;192.168.10.10/RECOC3_CD_02_mrceladm03; details in /u01/app/oracle/diag/crs/mr01vm03/crs/trace/ocssd.trc.
[OCSSD(127253)]CRS-1604: CSSD voting file is offline: o/192.168.10.5;192.168.10.6/DATAC3_CD_02_mrceladm01; details at (:CSSNM00069:) in /u01/app/oracle/diag/crs/mr01vm03/crs/trace/ocssd.trc.
[OCSSD(127253)]CRS-1604: CSSD voting file is offline: o/192.168.10.7;192.168.10.8/DATAC3_CD_02_mrceladm02; details at (:CSSNM00069:) in /u01/app/oracle/diag/crs/mr01vm03/crs/trace/ocssd.trc.
[OCSSD(127253)]CRS-1604: CSSD voting file is offline: o/192.168.10.9;192.168.10.10/DATAC3_CD_02_mrceladm03; details at (:CSSNM00069:) in /u01/app/oracle/diag/crs/mr01vm03/crs/trace/ocssd.trc.
[OCSSD(127253)]CRS-1626: A Configuration change request completed successfully
[OCSSD(127253)]CRS-1601: CSSD Reconfiguration complete. Active nodes are mr01vm03 .

[root@mr01vm03 bin]# ./crsctl replace votedisk +DATAC3
 

Successful addition of voting disk 7493e8b7db5d4f5cbfe5a4113b260499.
Successful addition of voting disk 93a9cb21b7954fc5bf099cde51b6d71c.
Successful addition of voting disk c1026a740b244f12bf650d5727f5313d.
Successful deletion of voting disk 2d8f371e02d54ffabf33e189ed5c42c7.
Successful deletion of voting disk dbba5f46190a4f1dbf73753552e2e82b.
Successful deletion of voting disk d6caa92e49234f9ebfe6fe749ea7dac9.
Successfully replaced voting disk group with +DATAC3.
 

CW 12.2 log:
CRS-4266: Voting file(s) successfully replaced

[OCSSD(127253)]CRS-1605: CSSD voting file is online: o/192.168.10.7;192.168.10.8/DATAC3_CD_02_mrceladm02; details in /u01/app/oracle/diag/crs/mr01vm03/crs/trace/ocssd.trc.
[OCSSD(127253)]CRS-1605: CSSD voting file is online: o/192.168.10.5;192.168.10.6/DATAC3_CD_02_mrceladm01; details in /u01/app/oracle/diag/crs/mr01vm03/crs/trace/ocssd.trc.
[OCSSD(127253)]CRS-1605: CSSD voting file is online: o/192.168.10.9;192.168.10.10/DATAC3_CD_02_mrceladm03; details in /u01/app/oracle/diag/crs/mr01vm03/crs/trace/ocssd.trc.
[OCSSD(127253)]CRS-1604: CSSD voting file is offline: o/192.168.10.7;192.168.10.8/RECOC3_CD_02_mrceladm02; details at (:CSSNM00069:) in /u01/app/oracle/diag/crs/mr01vm03/crs/trace/ocssd.trc.
[OCSSD(127253)]CRS-1626: A Configuration change request completed successfully
[OCSSD(127253)]CRS-1601: CSSD Reconfiguration complete. Active nodes are mr01vm03 .

[root@mr01vm03 bin]# ./crsctl query css votedisk
 

##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
1. ONLINE   7493e8b7db5d4f5cbfe5a4113b260499 (o/192.168.10.7;192.168.10.8/DATAC3_CD_02_mrceladm02) [DATAC3]
2. ONLINE   93a9cb21b7954fc5bf099cde51b6d71c (o/192.168.10.5;192.168.10.6/DATAC3_CD_02_mrceladm01) [DATAC3]
3. ONLINE   c1026a740b244f12bf650d5727f5313d (o/192.168.10.9;192.168.10.10/DATAC3_CD_02_mrceladm03) [DATAC3]
Located 3 voting disk(s).
 

[root@mr01vm03 bin]# ./crsctl stop crs
 

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'mr01vm03'
CRS-2673: Attempting to stop 'ora.crsd' on 'mr01vm03'
CRS-2677: Stop of 'ora.crsd' on 'mr01vm03' succeeded
CRS-2673: Attempting to stop 'ora.storage' on 'mr01vm03'
CRS-2673: Attempting to stop 'ora.crf' on 'mr01vm03'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'mr01vm03'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'mr01vm03'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'mr01vm03'
CRS-2677: Stop of 'ora.drivers.acfs' on 'mr01vm03' succeeded
CRS-2677: Stop of 'ora.crf' on 'mr01vm03' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'mr01vm03' succeeded
CRS-2677: Stop of 'ora.storage' on 'mr01vm03' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'mr01vm03'
CRS-2677: Stop of 'ora.mdnsd' on 'mr01vm03' succeeded
CRS-2677: Stop of 'ora.asm' on 'mr01vm03' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'mr01vm03'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'mr01vm03' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'mr01vm03'
CRS-2673: Attempting to stop 'ora.evmd' on 'mr01vm03'
CRS-2677: Stop of 'ora.ctssd' on 'mr01vm03' succeeded
CRS-2677: Stop of 'ora.evmd' on 'mr01vm03' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'mr01vm03'
CRS-2677: Stop of 'ora.cssd' on 'mr01vm03' succeeded
CRS-2673: Attempting to stop 'ora.diskmon' on 'mr01vm03'
CRS-2673: Attempting to stop 'ora.gipcd' on 'mr01vm03'
CRS-2677: Stop of 'ora.gipcd' on 'mr01vm03' succeeded
CRS-2677: Stop of 'ora.diskmon' on 'mr01vm03' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'mr01vm03' has completed
 

CW 12.2 log:
CRS-4133: Oracle High Availability Services has been stopped.
[GPNPD(127143)]CRS-2329: GPNPD on node mr01vm03 shut down.
[MDNSD(127076)]CRS-5602: mDNS service stopping by request.
[MDNSD(127076)]CRS-8504: Oracle Clusterware MDNSD process with operating system process ID 127076 is exiting
[OCTSSD(127440)]CRS-2405: The Cluster Time Synchronization Service on host mr01vm03 is shutdown by user
[OCTSSD(127440)]CRS-8504: Oracle Clusterware OCTSSD process with operating system process ID 127440 is exiting
[OCSSD(127253)]CRS-1603: CSSD on node mr01vm03 has been shut down.
[OCSSD(127253)]CRS-1660: The CSS daemon shutdown has completed
[OCSSD(127253)]CRS-8504: Oracle Clusterware OCSSD process with operating system process ID 127253 is exiting
[ORAROOTAGENT(126973)]CRS-5822: Agent '/u01/app/12.2.0.1/grid/bin/orarootagent_root' disconnected from server. Details at (:CRSAGF00117:) {0:1:6} in /u01/app/oracle/diag/crs/mr01vm03/crs/trace/ohasd_orarootagent_root.trc.

[root@mr01vm03 bin]# ./crsctl start crs

CRS-4123: Oracle High Availability Services has been started.


CW 12.2 log:
[OHASD(132769)]CRS-8500: Oracle Clusterware OHASD process is starting with operating system process ID 132769
[OHASD(132769)]CRS-0714: Oracle Clusterware Release 12.2.0.1.0.
[OHASD(132769)]CRS-2112: The OLR service started on node mr01vm03.
[OHASD(132769)]CRS-1301: Oracle High Availability Service started on node mr01vm03.
[OHASD(132769)]CRS-8017: location: /etc/oracle/lastgasp has 2 reboot advisory log files, 0 were announced and 0 errors occurred
[ORAROOTAGENT(132854)]CRS-8500: Oracle Clusterware ORAROOTAGENT process is starting with operating system process ID 132854
[CSSDAGENT(132870)]CRS-8500: Oracle Clusterware CSSDAGENT process is starting with operating system process ID 132870
[CSSDMONITOR(132882)]CRS-8500: Oracle Clusterware CSSDMONITOR process is starting with operating system process ID 132882
[ORAAGENT(132866)]CRS-8500: Oracle Clusterware ORAAGENT process is starting with operating system process ID 132866
[ORAAGENT(132932)]CRS-8500: Oracle Clusterware ORAAGENT process is starting with operating system process ID 132932
[MDNSD(132950)]CRS-8500: Oracle Clusterware MDNSD process is starting with operating system process ID 132950
[EVMD(132954)]CRS-8500: Oracle Clusterware EVMD process is starting with operating system process ID 132954
[GPNPD(133002)]CRS-8500: Oracle Clusterware GPNPD process is starting with operating system process ID 133002
[GPNPD(133002)]CRS-2328: GPNPD started on node mr01vm03.
[GIPCD(133059)]CRS-8500: Oracle Clusterware GIPCD process is starting with operating system process ID 133059
[CSSDMONITOR(133145)]CRS-8500: Oracle Clusterware CSSDMONITOR process is starting with operating system process ID 133145
[CSSDAGENT(133161)]CRS-8500: Oracle Clusterware CSSDAGENT process is starting with operating system process ID 133161
[OCSSD(133176)]CRS-8500: Oracle Clusterware OCSSD process is starting with operating system process ID 133176
[OCSSD(133176)]CRS-1713: CSSD daemon is started in hub mode
[OCSSD(133176)]CRS-1707: Lease acquisition for node mr01vm03 number 1 completed
[OCSSD(133176)]CRS-1605: CSSD voting file is online: o/192.168.10.5;192.168.10.6/DATAC3_CD_02_mrceladm01; details in /u01/app/oracle/diag/crs/mr01vm03/crs/trace/ocssd.trc.
[OCSSD(133176)]CRS-1605: CSSD voting file is online: o/192.168.10.7;192.168.10.8/DATAC3_CD_02_mrceladm02; details in /u01/app/oracle/diag/crs/mr01vm03/crs/trace/ocssd.trc.
[OCSSD(133176)]CRS-1605: CSSD voting file is online: o/192.168.10.9;192.168.10.10/DATAC3_CD_02_mrceladm03; details in /u01/app/oracle/diag/crs/mr01vm03/crs/trace/ocssd.trc.
[OCSSD(133176)]CRS-1601: CSSD Reconfiguration complete. Active nodes are mr01vm03 .
[OCSSD(133176)]CRS-1720: Cluster Synchronization Services daemon (CSSD) is ready for operation.
[OCTSSD(133414)]CRS-8500: Oracle Clusterware OCTSSD process is starting with operating system process ID 133414
[OCTSSD(133414)]CRS-2403: The Cluster Time Synchronization Service on host mr01vm03 is in observer mode.
[OCTSSD(133414)]CRS-2401: The Cluster Time Synchronization Service started on host mr01vm03.
[OCTSSD(133414)]CRS-2407: The new Cluster Time Synchronization Service reference node is host mr01vm03.
[OSYSMOND(133806)]CRS-8500: Oracle Clusterware OSYSMOND process is starting with operating system process ID 133806
[CRSD(133818)]CRS-8500: Oracle Clusterware CRSD process is starting with operating system process ID 133818
[OLOGGERD(133919)]CRS-8500: Oracle Clusterware OLOGGERD process is starting with operating system process ID 133919
[CRSD(133818)]CRS-1012: The OCR service started on node mr01vm03.
[CRSD(133818)]CRS-1201: CRSD started on node mr01vm03.
[ORAAGENT(134062)]CRS-8500: Oracle Clusterware ORAAGENT process is starting with operating system process ID 134062
[ORAROOTAGENT(134088)]CRS-8500: Oracle Clusterware ORAROOTAGENT process is starting with operating system process ID 134088
[ORAAGENT(134062)]CRS-5011: Check of resource "dbm03" failed: details at "(:CLSN00007:)" in "/u01/app/oracle/diag/crs/mr01vm03/crs/trace/crsd_oraagent_oracle.trc"
[CRSD(133818)]CRS-2772: Server 'mr01vm03' has been assigned to pool 'Generic'.
[CRSD(133818)]CRS-2772: Server 'mr01vm03' has been assigned to pool 'ora.dbm03'.

[root@mr01vm03 bin]# ./crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
               ONLINE  ONLINE       mr01vm03                 STABLE
ora.DATAC1.dg
               OFFLINE OFFLINE      mr01vm03                 STABLE
ora.DATAC3.dg
               ONLINE  ONLINE       mr01vm03                 STABLE
ora.LISTENER.lsnr
               ONLINE  ONLINE       mr01vm03                 STABLE
ora.RECOC1.dg
               OFFLINE OFFLINE      mr01vm03                 STABLE
ora.RECOC3.dg
               ONLINE  ONLINE       mr01vm03                 STABLE
ora.chad
               ONLINE  OFFLINE      mr01vm03                 STABLE
ora.net1.network
               ONLINE  ONLINE       mr01vm03                 STABLE
ora.ons
               ONLINE  ONLINE       mr01vm03                 STABLE
ora.proxy_advm
               OFFLINE OFFLINE      mr01vm03                 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       mr01vm03                 STABLE
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       mr01vm03                 STABLE
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       mr01vm03                 STABLE
ora.MGMTLSNR
      1        ONLINE  ONLINE       mr01vm03                 169.254.47.1 192.168
                                                             .10.15 192.168.10.16
                                                             ,STABLE
ora.asm
      1        ONLINE  ONLINE       mr01vm03                 Started,STABLE
ora.cvu
      1        ONLINE  ONLINE       mr01vm03                 STABLE
ora.dbm03.db
      1        ONLINE  OFFLINE      mr01vm03                 STARTING
ora.mgmtdb
      1        ONLINE  OFFLINE      mr01vm03                 Instance Shutdown,ST
                                                             ARTING
ora.mr01vm03.vip
      1        ONLINE  ONLINE       mr01vm03                 STABLE
ora.qosmserver
      1        ONLINE  ONLINE       mr01vm03                 STABLE
ora.scan1.vip
      1        ONLINE  ONLINE       mr01vm03                 STABLE
ora.scan2.vip
      1        ONLINE  ONLINE       mr01vm03                 STABLE
ora.scan3.vip
      1        ONLINE  ONLINE       mr01vm03                 STABLE
--------------------------------------------------------------------------------
[root@mr01vm03 bin]#







WARNING: Managed Standby Recovery started with REAL TIME APPLY. DELAY ... ignored


In the customer's alert.log:


WARNING: Managed Standby Recovery started with REAL TIME APPLY
  DELAY 2 minutes specified at primary ignored

This message appears in both the primary's and the standby's alert logs.

In primary:
Mon Aug 13 14:46:36 2018
WARNING: Managed Standby Recovery started with REAL TIME APPLY
  DELAY 2 minutes specified at primary ignored
ARC1: Standby redo logfile selected for thread 1 sequence 187706 for destination LOG_ARCHIVE_DEST_2
Mon Aug 13 14:48:10 2018
Thread 1 advanced to log sequence 187708 (LGWR switch)
  Current log# 2 seq# 187708 mem# 0: +DATAC1/SPURSTB/ONLINELOG/group_2.redo
  Current log# 2 seq# 187708 mem# 1: +RECOC1/SPURSTB/ONLINELOG/group_2.redo



In standby:
Mon Aug 13 14:46:36 2018
WARNING: Managed Standby Recovery started with REAL TIME APPLY
  DELAY 2 minutes specified at primary ignored
RFS[1]: Selected log 5 for thread 1 sequence 187706 dbid 496990843 branch 945690334
Mon Aug 13 14:46:57 2018
Media Recovery Waiting for thread 1 sequence 187706 (in transit)
Mon Aug 13 14:47:15 2018
Archived Log entry 320928 added for thread 1 sequence 187706 ID 0x22d4655a dest 1:
Mon Aug 13 14:47:15 2018
ARC4: Archive log thread 1 sequence 187706 available in 1 minute(s)
Mon Aug 13 14:47:16 2018
Media Recovery Delayed for 1 minute(s) (thread 1 sequence 187706)
Mon Aug 13 14:48:16 2018
Media Recovery Log +RECOC1/SPUR/ARCHIVELOG/2018_08_13/thread_1_seq_187706.1487.984062811
Mon Aug 13 14:48:34 2018
WARNING: Managed Standby Recovery started with REAL TIME APPLY
  DELAY 2 minutes specified at primary ignored
RFS[1]: Selected log 5 for thread 1 sequence 187707 dbid 496990843 branch 945690334
Mon Aug 13 14:48:47 2018
Media Recovery Waiting for thread 1 sequence 187707 (in transit)
Mon Aug 13 14:49:11 2018
Archived Log entry 320929 added for thread 1 sequence 187707 ID 0x22d4655a dest 1:
Mon Aug 13 14:49:11 2018
ARC0: Archive log thread 1 sequence 187707 available in 1 minute(s)
Mon Aug 13 14:49:14 2018
Media Recovery Delayed for 1 minute(s) (thread 1 sequence 187707)


Cause:
Starting with 12.1, a physical standby runs with Real Time Apply by default, so the DELAY attribute of the archive destination is ignored.
If you need the DELAY n minutes behavior, restart managed recovery with the USING ARCHIVED LOGFILE clause:


ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT USING ARCHIVED LOGFILE;
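To verify the configured delay on the primary and switch the standby to delayed apply, something like the following can be used (a sketch; destination number 2 and the spurstb service name are taken from the logs above, and the DELAY attribute is assumed to be set on LOG_ARCHIVE_DEST_2):

-- On the primary: DELAY is an attribute of the remote destination, e.g.
-- ALTER SYSTEM SET log_archive_dest_2='SERVICE=spurstb DELAY=2 ...';
-- check the configured delay in minutes:
SELECT dest_id, delay_mins FROM v$archive_dest WHERE dest_id = 2;

-- On the standby: cancel real-time apply, then restart recovery from
-- archived logs so the DELAY attribute is honored:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT USING ARCHIVED LOGFILE;

Note that with USING ARCHIVED LOGFILE the standby no longer applies redo in real time; the delay shows up in the alert log as the "Media Recovery Delayed" messages seen above.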











