Monday, September 16, 2019

ORA-01405: fetched column value is NULL after upgrade from 12.1 to 18c

SYMPTOMs:

After the upgrade 12.1.0.2 -> 18.6 we lost our database and obtained a constantly reproducible:

ORA-00604: error occurred at recursive SQL level 1
ORA-01405: fetched column value is NULL


After the successful upgrade there were warning messages in the alert log:

Completed: ALTER DATABASE OPEN /* db agent *//* {1:44916:15577} */
Unable to obtain current patch information due to error: 1405, ORA-01405: fetched column value is NULL
===========================================================
Dumping current patch information
===========================================================
Unable to obtain current patch information due to error: 1405
===========================================================
## jox_ujs_status: database not open read-write or Java not installed, returning FALSE in pid 258708

As you can see, there is no patch information in the alert log, plus something concerning Java.
Investigating the patch issue, we found that every SQL statement led to the ORA-01405 error:

SQL> select * from dual;
select * from dual
              *
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01405: fetched column value is NULL

In the 10046 trace for "select * from dual" we got a fairly simple trace file:

*** 2019-09-15T11:41:35.514839+03:00
WAIT #139918613321520: nam='SQL*Net message from client' ela= 4123724 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=5429869063221
CLOSE #139918613321520:c=5,e=5,dep=0,type=1,tim=5429869063321
=====================
PARSING IN CURSOR #139918613299752 len=102 dep=1 uid=0 oct=3 lid=0 tim=5429869064742 hv=3908278695 ad='27f813db8' sqlid='68hhnz3ng76d7'
select max_iops, max_mbps, max_pmbps, latency, num_disks, additional_info  from resource_io_calibrate$
END OF STMT
PARSE #139918613299752:c=399,e=966,p=0,cr=0,cu=0,mis=1,r=0,dep=1,og=4,plh=1003572626,tim=5429869064742
EXEC  #139918613299752:c=21,e=21,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=1003572626,tim=5429869064821
FETCH #139918613299752:c=35,e=35,p=0,cr=2,cu=0,mis=0,r=0,dep=1,og=4,plh=1003572626,tim=5429869064871
STAT  #139918613299752 id=1 cnt=1 pid=0 pos=1 obj=303 op='TABLE ACCESS STORAGE FULL RESOURCE_IO_CALIBRATE$ (cr=2 pr=0 pw=0 str=1 time=34 us cost=2 size=41 card=1)'
CLOSE #139918613299752:c=56,e=56,dep=1,type=0,tim=5429869064963
=====================
PARSE ERROR #139918613303136:len=18 dep=0 uid=0 oct=3 lid=0 tim=5429869064981 err=604
select * from dual
WAIT #139918613303136: nam='Disk file operations I/O' ela= 29 FileOperation=8 fileno=0 filetype=8 obj#=-1 tim=5429869065096
WAIT #139918613303136: nam='SQL*Net break/reset to client' ela= 2 driver id=1650815232 break?=1 p3=0 obj#=-1 tim=5429869065126
WAIT #139918613303136: nam='SQL*Net break/reset to client' ela= 63 driver id=1650815232 break?=0 p3=0 obj#=-1 tim=5429869065201
WAIT #139918613303136: nam='SQL*Net message to client' ela= 1 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=5429869065217

*** 2019-09-15T11:41:37.746371+03:00
WAIT #139918613303136: nam='SQL*Net message from client' ela= 2229498 driver id=1650815232 #bytes=1 p3=0 obj#=-1 tim=5429871294743
XCTEND rlbk=0, rd_only=1, tim=5429871294873
CLOSE #139918613303136:c=6,e=7,dep=0,type=0,tim=5429871294938


Nothing in the trace that would point at the root of the problem, just the "PARSE ERROR ... err=604"

Security triggers? VPD ?

None !

As it turned out, the message "ORA-01405: fetched column value is NULL" appears at the intersection of:
-    Parallel SQL (parallel_degree_policy=AUTO)
-    parallel_degree_limit=IO 
-    upgrade 12.1 => 18c


REASON:

Oracle changed the data format stored in the resource_io_calibrate$.ADDITIONAL_INFO column,
and after the upgrade the 18c database cannot recognize the content of this column.
As Oracle says: "The DDL for resource_io_calibrate$ is different between 12c and 18.5.
the upgrade isn't handling the new ADDITIONAL_INFO column correctly"

So, if you use parallel_degree_limit=IO together with parallel_degree_policy=AUTO and there is data in the resource_io_calibrate$ table, you can hit ORA-01405.

After we set parallel_degree_limit=CPU the error disappeared.


SOLUTIONs:

- delete from resource_io_calibrate$;
  commit;
  exec dbms_resource_manager.calibrate_io  -- repopulates resource_io_calibrate$ with new values

  OR

- change parallel_degree_limit to CPU
  (and I recommend running "delete from resource_io_calibrate$;" anyway, to avoid future errors)
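The first solution can be scripted roughly like this (a sketch: the disk count and latency below are placeholder values to adapt to your storage; CALIBRATE_IO also requires timed_statistics=TRUE and asynchronous I/O enabled on the datafiles):

```sql
-- Run as SYSDBA. Clears the stale 12.1-format rows, then repopulates
-- resource_io_calibrate$ via the documented calibration procedure.
set serveroutput on

delete from resource_io_calibrate$;
commit;

declare
  l_max_iops   pls_integer;
  l_max_mbps   pls_integer;
  l_actual_lat pls_integer;
begin
  -- 8 disks / 10 ms are placeholders: use your real disk count
  -- and acceptable latency.
  dbms_resource_manager.calibrate_io(
    num_physical_disks => 8,
    max_latency        => 10,
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_actual_lat);
  dbms_output.put_line('max_iops=' || l_max_iops || ' max_mbps=' || l_max_mbps);
end;
/
```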


Getting ORA-604, ORA-01405: fetched column value is NULL when PARALLEL_DEGREE_LIMIT = IO in 18c (Doc ID 2537431.1)

Saturday, September 7, 2019

Exadata image 18.1.19 pitfall

After Exadata image 18.1.19 was installed we got some errors related to new process creation:

The "tail  alert.log" bring the message:
$ tail -100f alert.log
-bash: fork: retry: Resource temporarily unavailable
-bash: fork: retry: Resource temporarily unavailable


Some time after the reboot, once the DBs had started, we were unable to switch to the oracle account:
# su - oracle
su: /bin/bash: Resource temporarily unavailable


Because the problem was related to new process creation, we checked the usual files and found them OK. These files were old enough; nobody had modified them:
/etc/sysctl.conf
/etc/security/limits.conf
oracle soft nproc 400000
oracle hard nproc 400000

But "ulimit" show  the very different values:
[oracle@exa6dbadm01 ~]$ ulimit -u
2047

The number of oracle processes was close to 2048:
# ps -ef | grep oracle | wc -l
2326

A Linux reboot didn't bring new ulimit values.

The "strace" is the way to found the root cause:

# strace su - oracle
...
open("/etc/security/limits.d/90-nproc.conf", O_RDONLY) = 3
...
setrlimit(RLIMIT_NPROC, {rlim_cur=2047, rlim_max=400000}) = 0
...


# cat /etc/security/limits.d/90-nproc.conf
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.
*          soft    nproc     1024
root       soft    nproc     unlimited
#Oracle recommended value for nproc is set to 2047 for user oracle
oracle  soft  nproc  2047


After we changed "oracle  soft  nproc" there to 400000, the system worked fine.
Bingo !
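The underlying gotcha: pam_limits reads /etc/security/limits.d/*.conf after /etc/security/limits.conf, and for two explicit per-user entries the last match wins, so the 90-nproc.conf line silently overrides limits.conf. A self-contained sketch of that ordering (temp copies of the two files stand in for the real ones):

```shell
# Emulate pam_limits ordering for "soft nproc" of user oracle.
# limits.conf is read first, then the limits.d fragments; for entries of
# equal specificity the last matching line wins.
tmpdir=$(mktemp -d)
cat > "$tmpdir/limits.conf" <<'EOF'
oracle soft nproc 400000
oracle hard nproc 400000
EOF
cat > "$tmpdir/90-nproc.conf" <<'EOF'
*          soft    nproc     1024
oracle  soft  nproc  2047
EOF
# Keep only the last explicit "oracle soft nproc" value seen.
effective=$(awk '$1 == "oracle" && $2 == "soft" && $3 == "nproc" { v = $4 }
                 END { print v }' \
    "$tmpdir/limits.conf" "$tmpdir/90-nproc.conf")
echo "effective soft nproc for oracle: $effective"
rm -rf "$tmpdir"
```

So raising the value in limits.conf alone can never win while the limits.d entry exists; edit (or remove) the fragment itself, as we did.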


Friday, September 6, 2019

In defense of HCC: ORA-39726: unsupported add/drop column operation on compressed tables

A friend asked me to check the following:

ISSUE:
ORA-39726 is hit when adding column with default value to basic compressed table.

CAUSE:
A compressed table does not allow some operations that are allowed on an uncompressed table.


"Is there any error during add column with default value to table compressed with HCC ?
Does Oracle unpack HCC data during add column with default value to the table compressed with HCC ?"  he asked.

----------------------------------------------------------

And the answer is here.

RDBMS version is 12.1.0.2.171017 (I chose the oldest version I could find; newer versions are believed to behave the same).

$ opatch lspatches
26635845;Database PSU 12.1.0.2.171017, Oracle JavaVM Component (OCT2017)
25729214;OCW Interim patch for 25729214
26717470;Database Bundle Patch : 12.1.0.2.171017 (26717470)



$ sqlplus / as sysdba
SQL> create tablespace yu extent management local segment space management auto;
SQL> create user yu identified by yu default tablespace yu;
SQL> grant dba to yu ;

SQL> conn yu/yu

SQL> create table NOCOMP as select owner,object_type,object_name,object_id,created from dba_objects order by owner,object_type,object_name;
insert into nocomp select * from nocomp;
insert into nocomp select * from nocomp;
insert into nocomp select * from nocomp;
insert into nocomp select * from nocomp;
commit;

Table created.

SQL>
151156 rows created.

SQL>
302312 rows created.

SQL>
604624 rows created.

SQL>
1209248 rows created.

SQL>
Commit complete.

SQL> select table_name from user_tables;

TABLE_NAME
--------------------------------------------
NOCOMP

SQL> select segment_name, bytes, bytes/1048576 MB, blocks from user_segments order by 1;

SEGMENT_NAME        BYTES   MB    BLOCKS
-------------- ---------- ------- ----------
NOCOMP          159383552   152   9728


SQL> create table BASICOMP compress as select * from NOCOMP;

create table OLTPCOMP  compress for OLTP as select * from NOCOMP;

create table QL   compress for query LOW as select * from NOCOMP;
create table QLRL compress for query LOW row level locking as select * from NOCOMP;

create table QH   compress for query HIGH as select * from NOCOMP;
create table QHRL compress for query HIGH row level locking as select * from NOCOMP;

create table AL   compress for archive LOW as select * from NOCOMP;
create table ALRL compress for archive LOW row level locking as select * from NOCOMP;

create table AH   compress for archive HIGH as select * from NOCOMP;
create table AHRL compress for archive HIGH row level locking as select * from NOCOMP;


Table created.

SQL>
Table created.

SQL> SQL>
Table created.

SQL>
Table created.

SQL> SQL>
Table created.

SQL>
Table created.

SQL> SQL>
Table created.

SQL>
Table created.

SQL> SQL>
Table created.

SQL>
Table created.

SQL> exec dbms_stats.gather_schema_stats('YU');
PL/SQL procedure successfully completed.

SQL> col segment_name for a8
SQL> select segment_name, bytes, bytes/1048576 MB, blocks from user_segments order by 2 desc;

SEGMENT_      BYTES        MB     BLOCKS
-------- ---------- ---------- ---------

NOCOMP    159383552       152       9728
OLTPCOMP  100663296        96       6144
BASICOMP   92274688        88       5632
QLRL       47185920        45       2880
QL         44040192        42       2688
QHRL       26214400        25       1600
ALRL       25165824        24       1536
QH         24117248        23       1472
AL         23068672        22       1408
AHRL       18874368        18       1152
AH         16777216        16       1024

SQL> col table_name for a8
SQL> select table_name,num_rows,blocks from user_tables order by blocks desc;

TABLE_NA   NUM_ROWS    BLOCKS
-------- ---------- ---------
NOCOMP      2418496      9652
OLTPCOMP    2418496      5799
BASICOMP    2418496      5211
QLRL        2418496      2848
QL          2418496      2670
QHRL        2418496      1577
ALRL        2418496      1492
QH          2418496      1456
AL          2418496      1400
AHRL        2418496      1148
AH          2418496      1010

SQL> 

alter table NOCOMP   add new_column number(3) default 0;
alter table BASICOMP add new_column number(3) default 0;
alter table OLTPCOMP add new_column number(3) default 0;
alter table QLRL     add new_column number(3) default 0;
alter table QL       add new_column number(3) default 0;
alter table QHRL     add new_column number(3) default 0;
alter table ALRL     add new_column number(3) default 0;
alter table QH       add new_column number(3) default 0;
alter table AL       add new_column number(3) default 0;
alter table AHRL     add new_column number(3) default 0;
alter table AH       add new_column number(3) default 0;
Table altered.

alter table BASICOMP add new_column number(3) default 0
                        *
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

SQL>
Table altered.

SQL>
Table altered.

SQL>
Table altered.

SQL>
Table altered.

SQL>
Table altered.

SQL>
Table altered.

SQL>
Table altered.

SQL>
Table altered.

SQL>
Table altered.

SQL> exec dbms_stats.gather_schema_stats('YU');
PL/SQL procedure successfully completed.

SQL> col segment_name for a8
SQL> select segment_name, bytes, bytes/1048576 MB, blocks from user_segments order by 2 desc;

SEGMENT_      BYTES     MB   BLOCKS  BLOCKS before add column
-------- ---------- ------- ------- --------------------------

NOCOMP    159383552    152     9728    9728
OLTPCOMP  100663296     96     6144    6144
BASICOMP   92274688     88     5632    5632
QLRL       47185920     45     2880    2880
QL         44040192     42     2688    2688
QHRL       26214400     25     1600    1600
ALRL       25165824     24     1536    1536
QH         24117248     23     1472    1472
AL         23068672     22     1408    1408
AHRL       18874368     18     1152    1152
AH         16777216     16     1024    1024

SQL> col table_name for a8
SQL> select table_name,num_rows,blocks from user_tables order by blocks desc;

TABLE_NA   NUM_ROWS    BLOCKS   BLOCKS before add column
-------- ---------- ----------   ------------------------
NOCOMP      2418496      9652     9652
OLTPCOMP    2418496      5799     5799
BASICOMP    2418496      5211     5211
QLRL        2418496      2848     2848
QL          2418496      2670     2670
QHRL        2418496      1577     1577
ALRL        2418496      1492     1492
QH          2418496      1456     1456
AL          2418496      1400     1400
AHRL        2418496      1148     1148
AH          2418496      1010     1010


Not a single block was added by the ADD COLUMN modification!

As you can see, the only failure is with BASIC compression:

alter table BASICOMP add new_column number(3) default 0
                         *
ERROR at line 1:
ORA-39726: unsupported add/drop column operation on compressed tables

In the other cases "alter table add column" works fine: no decompression at all.
As the 12.1.0.2 documentation says:
"... the default value is stored as metadata, the column itself is not populated with data"




Wednesday, April 17, 2019

How to measure network throughput with RMAN ?

Implementing ZDLRA in customers' environments, we often need to measure network throughput. The easy way is to use RMAN.

I assume you have prepared the customer database to work with ZDLRA (created the wallet, installed libra.so) and can back up this database or its datafiles to ZDLRA without errors.
But if the backup runs slowly, you need to test the network between the protected database and the ZDLRA.

The first step is to prepare the nettest.rman file:

$ cat nettest.rman
# rman target / |tee /home/oracle/rman_log/nettest_`date +%d%H%M`.log
RUN  {
ALLOCATE CHANNEL ch01 DEVICE TYPE sbt_tape PARMS 'SBT_LIBRARY=/home/oracle/zdlra/libra.so,ENV=(RA_WALLET=LOCATION=file:/home/oracle/zdlra/ CREDENTIAL_ALIAS=zdlra_ib)' FORMAT '%I_%T_%p_%u_%c';

SEND CHANNEL ch01 'NETTEST BACKUP 1024M';
}

Then connect to target database and run this script:

$ rman target / |tee /home/oracle/rman_log/nettest_`date +%d%H%M`.log
Recovery Manager: Release 12.2.0.1.0 - Production on Wed Apr 17 09:46:02 2019
Copyright (c) 1982, 2017, Oracle and/or its affiliates.  All rights reserved.
connected to target database: BR (DBID=1302487780)

RMAN> @nettest.rman

RMAN> # rman target / |tee /home/oracle/rman_log/nettest_`date +%d%H%M`.log
2> RUN  {
3> ALLOCATE CHANNEL ch01 DEVICE TYPE sbt_tape PARMS 'SBT_LIBRARY=/home/oracle/zdlra/libra.so,ENV=(RA_WALLET=LOCATION=file:/home/oracle/zdlra/ CREDENTIAL_ALIAS=zdlra_ib)' FORMAT '%I_%T_%p_%u_%c';
4> SEND CHANNEL ch01 'NETTEST BACKUP 1024M';
5> }

using target database control file instead of recovery catalog
allocated channel: ch01
channel ch01: SID=1460 instance=br1 device type=SBT_TAPE
channel ch01: RA Library (ZDLRA) SID=86B53DD7A23715B3E053E20F1FAC745F

RMAN-06918: warning: allocated SBT channel to the Recovery Appliance in NOCATALOG mode

released channel: ch01
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of send command at 04/17/2019 09:46:17
ORA-19559: error sending device command: NETTEST BACKUP 1024M
ORA-19557: device error, device type: SBT_TAPE, device name:
ORA-27194: skgfdvcmd: sbtcommand returned error
ORA-19511: non RMAN, but media manager or vendor specific failure, error text:
   KBHS-00402: NETTEST sucessfully completed
KBHS-00400: NETTEST BACKUP: 1073741824 bytes sent in 4033428 microseconds

RMAN>
RMAN> **end-of-file**

RMAN>

Recovery Manager complete.



Don't pay attention to the errors; they don't matter.
Look at the numbers in the line:
NETTEST BACKUP: 1073741824 bytes sent in 4033428 microseconds

RMAN transferred 1 GB in about 4 seconds over my network.
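If you want an exact figure, the KBHS line can be converted to MiB/s with a bit of awk (the sample line is copied from the output above):

```shell
# Parse "NNN bytes sent in MMM microseconds" from the KBHS-00400/00401
# NETTEST line and convert it to a transfer rate in MiB/s.
line='KBHS-00400: NETTEST BACKUP: 1073741824 bytes sent in 4033428 microseconds'
rate=$(echo "$line" | awk '{
    for (i = 1; i < NF; i++) {
        if ($(i+1) == "bytes")        b  = $i
        if ($(i+1) == "microseconds") us = $i
    }
    printf "%.1f MiB/s", (b / us) * 1e6 / 1048576
}')
echo "$rate"
```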

In the same way you can test the restore operation:
Syntax: NETTEST {[BACKUP | RESTORE]} [<size>M];


RMAN> @nettest.rman

RMAN> # rman target / |tee /home/oracle/rman_log/nettest_`date +%d%H%M`.log
2> RUN  {
3> ALLOCATE CHANNEL ch01 DEVICE TYPE sbt_tape PARMS 'SBT_LIBRARY=/home/oracle/zdlra/libra.so,ENV=(RA_WALLET=LOCATION=file:/home/oracle/zdlra/ CREDENTIAL_ALIAS=zdlra_ib)' FORMAT '%I_%T_%p_%u_%c';
4>
5> #SEND CHANNEL ch01 'NETTEST BACKUP 1024M';
6> SEND CHANNEL ch01 'NETTEST RESTORE 1024M';
7> }
using target database control file instead of recovery catalog
allocated channel: ch01
channel ch01: SID=127 instance=br1 device type=SBT_TAPE
channel ch01: RA Library (ZDLRA) SID=86B5F9BE6256D7B5E053E20F1FACB9D2
RMAN-06918: warning: allocated SBT channel to the Recovery Appliance in NOCATALOG mode

released channel: ch01
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of send command at 04/17/2019 10:38:47
ORA-19559: error sending device command: NETTEST RESTORE 1024M
ORA-19557: device error, device type: SBT_TAPE, device name:
ORA-27194: skgfdvcmd: sbtcommand returned error
ORA-19511: non RMAN, but media manager or vendor specific failure, error text:
   KBHS-00402: NETTEST sucessfully completed
KBHS-00401: NETTEST RESTORE: 1073741824 bytes received in 813152 microseconds



Source:
How to measure network performance from RMAN for ZDLRA or Cloud Backups (Doc ID 2371860.1)

Wednesday, April 3, 2019

Exadata X8 released

Exadata X8 released:

https://www.oracle.com/engineered-systems/exadata/database-machine-x8/

Cells of the new type at $16,500:
https://www.oracle.com/assets/exadata-pricelist-070598.pdf

Saturday, February 16, 2019

ORA-15017: diskgroup ... cannot be mounted




Some days ago (while preparing an Exadata Storage Snapshots environment) I changed asm_diskgroups in the ASM parameter file. Before the change this parameter pointed to the DATAC1 and RECOC1 diskgroups:
asm_diskgroups = 'DATAC1','RECOC1'

I dropped the RECOC1 diskgroup, created SPARSE in its place, and changed the parameter to:
asm_diskgroups = 'DATAC1','SPARSE'
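For reference, with an spfile the parameter change itself can be done like this (a sketch using my diskgroup names):

```sql
-- Update the list of diskgroups ASM mounts at startup:
-- drop RECOC1 from the list and add SPARSE.
ALTER SYSTEM SET asm_diskgroups = 'DATAC1','SPARSE' SCOPE=BOTH SID='*';
```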

After a day or two I noticed a message in the ASM alert log that appeared every minute and mentioned the RECOC1 disk group:
ERROR: ALTER DISKGROUP RECOC1 MOUNT  /* asm agent *//* {1:40706:63383} */
SQL> ALTER DISKGROUP RECOC1 MOUNT  /* asm agent *//* {1:40706:63408} */
NOTE: cache registered group RECOC1 4/0xAAB48873
NOTE: cache began mount (first) of group RECOC1 4/0xAAB48873
ERROR: no read quorum in group: required 2, found 0 disks
NOTE: cache dismounting (clean) group 4/0xAAB48873 (RECOC1)
NOTE: messaging CKPT to quiesce pins Unix process pid: 213192, image: oracle@ed03dbadm01.distr.fors.ru (TNS V1-V3)
NOTE: dbwr not being msg'd to dismount
NOTE: LGWR not being messaged to dismount
NOTE: cache dismounted group 4/0xAAB48873 (RECOC1)
NOTE: cache ending mount (fail) of group RECOC1 number=4 incarn=0xaab48873
NOTE: cache deleting context for group RECOC1 4/0xaab48873
GMON dismounting group 4 at 63274 for pid 42, osid 213192
ERROR: diskgroup RECOC1 was not mounted
ORA-15032: not all alterations performed
ORA-15017: diskgroup "RECOC1" cannot be mounted
ORA-15040: diskgroup is incomplete

Very strange!
This message indicates that the list of diskgroups is also stored somewhere else in the Grid Infrastructure.



The solution is easy:
[grid@ed03dbadm01 ~]$ srvctl -h |grep diskgroup


Usage: srvctl start diskgroup -diskgroup <dg_name> [-node "<node_list>"]
Usage: srvctl stop diskgroup -diskgroup <dg_name> [-node "<node_list>"] [-force]
Usage: srvctl status diskgroup -diskgroup <dg_name> [-node "<node_list>"] [-detail] [-verbose]

Usage: srvctl enable diskgroup -diskgroup <dg_name> [-node "<node_list>"]
Usage: srvctl disable diskgroup -diskgroup <dg_name> [-node "<node_list>"]
Usage: srvctl remove diskgroup -diskgroup <dg_name> [-force]
Usage: srvctl predict diskgroup -diskgroup <diskgroup_name> [-verbose]

[grid@ed03dbadm01 ~]$ srvctl status diskgroup -diskgroup recoc1
Disk Group recoc1 is not running
[grid@ed03dbadm01 ~]$ srvctl remove diskgroup -diskgroup RECOC1
 


And the message no longer occurs !

Friday, February 15, 2019

Exadata Storage Snapshots, Part5: Clone from Clone

(start is here)

Clone from clone


We want to make a clone database KK from our clone database K.

I stopped the 2nd instance of K on the 2nd RAC node and kept the 1st instance of K running, because I needed "create controlfile".


On the 1st node, prepare:

SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

SQL> SET newpage 0
SET linesize 999
SET pagesize 0
SET feedback off
SET heading off
SET echo off
SET space 0
SET tab off
SET trimspool on
SPOOL rename.sql
SELECT 'EXECUTE dbms_dnfs.clonedb_renamefile('||''''||name||''''||','||''''||REPLACE(REPLACE(REPLACE(name,'.','_'),'K','KK'),'+DATA','+SPARSE')||''''||');'
FROM v$datafile;


I prepared 3 files:

-rw-r----- 1 oracle oinstall 791 Jan 29 17:14 cf.sql
-rw-r----- 1 oracle oinstall 965 Jan 29 17:18 initkk.ora
-rw-r--r-- 1 oracle oinstall 746 Jan 29 16:57 rename.sql

$ cat initkk.ora


audit_file_dest=/u01/app/oracle/admin/kk/adump  # $ mkdir -p /u01/app/oracle/admin/kk/adump
*.cluster_database=FALSE
kk2.cluster_interconnects='192.168.13.2'
kk1.cluster_interconnects='192.168.13.1'
*.compatible='18.0.0.0.0'
*.control_files='+DATAC1/kk/CONTROLFILE/CF1.dbf'
*.db_block_size=8192
*.db_create_file_dest='+DATAC1'
*.db_files=1024
*.db_name='kk'
*.db_recovery_file_dest_size=100g
*.db_recovery_file_dest='+SPARSE'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP)(SERVICE=kkXDB)'
*.fast_start_mttr_target=300
*.filesystemio_options='setall'
*.global_names=TRUE
kk2.instance_number=2
kk1.instance_number=1
*.local_listener='-oraagent-dummy-'
*.log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST'
*.log_archive_format='%t_%s_%r.dbf'
*.open_cursors=1000
*.parallel_threads_per_cpu=1
*.pga_aggregate_target=6442450944
*.processes=2048
*.remote_login_passwordfile='exclusive'
*.sga_target=8192m
kk2.thread=2
kk1.thread=1
kk2.undo_tablespace='UNDOTBS2'
kk1.undo_tablespace='UNDOTBS1'

CREATE CONTROLFILE REUSE SET DATABASE kk RESETLOGS ARCHIVELOG
    MAXLOGFILES 1024
    MAXLOGMEMBERS 5
    MAXDATAFILES 32767
    MAXINSTANCES 32
    MAXLOGHISTORY 33012
LOGFILE
  GROUP 1 '+DATAC1' SIZE 4096M ,
  GROUP 2 '+DATAC1' SIZE 4096M ,
  GROUP 3 '+DATAC1' SIZE 4096M
DATAFILE
  '+SPARSE/K/DATAFILE/system_260_998393855',
  '+SPARSE/K/DATAFILE/sysaux_261_998393857',
  '+SPARSE/K/DATAFILE/undotbs1_262_998393859',
  '+SPARSE/K/DATAFILE/undotbs2_264_998393867',
  '+SPARSE/K/DATAFILE/users_265_998393867',
  '+SPARSE/K/DATAFILE/tpch_291_998477197'

CHARACTER SET AL32UTF8 ;


$ cat rename.sql
EXECUTE dbms_dnfs.clonedb_renamefile('+SPARSE/K/DATAFILE/system_260_998393855','+SPARSE/KK/DATAFILE/system_260_998393855');
EXECUTE dbms_dnfs.clonedb_renamefile('+SPARSE/K/DATAFILE/sysaux_261_998393857','+SPARSE/KK/DATAFILE/sysaux_261_998393857');
EXECUTE dbms_dnfs.clonedb_renamefile('+SPARSE/K/DATAFILE/undotbs1_262_998393859','+SPARSE/KK/DATAFILE/undotbs1_262_998393859');
EXECUTE dbms_dnfs.clonedb_renamefile('+SPARSE/K/DATAFILE/undotbs2_264_998393867','+SPARSE/KK/DATAFILE/undotbs2_264_998393867');
EXECUTE dbms_dnfs.clonedb_renamefile('+SPARSE/K/DATAFILE/users_265_998393867','+SPARSE/KK/DATAFILE/users_265_998393867');
EXECUTE dbms_dnfs.clonedb_renamefile('+SPARSE/K/DATAFILE/tpch_291_998477197','+SPARSE/KK/DATAFILE/tpch_291_998477197');


Look at the K datafiles:

[grid@ed03dbadm01 ~]$ asmcmd
ASMCMD> cd spa*/k/da*
ASMCMD> ls -l
Type      Redund  Striped  Time             Sys  Name
DATAFILE  MIRROR  COARSE   JAN 29 16:00:00  N    sysaux_261_998393857 => +SPARSE/K/DATAFILE/sysaux_261_998393857.264.998749765
DATAFILE  MIRROR  COARSE   JAN 29 16:00:00  Y    sysaux_261_998393857.264.998749765
DATAFILE  MIRROR  COARSE   JAN 29 16:00:00  N    system_260_998393855 => +SPARSE/K/DATAFILE/system_260_998393855.265.998749747
DATAFILE  MIRROR  COARSE   JAN 29 16:00:00  Y    system_260_998393855.265.998749747
DATAFILE  MIRROR  COARSE   JAN 29 16:00:00  N    tpch_291_998477197 => +SPARSE/K/DATAFILE/tpch_291_998477197.274.998749765
DATAFILE  MIRROR  COARSE   JAN 29 16:00:00  Y    tpch_291_998477197.274.998749765
DATAFILE  MIRROR  COARSE   JAN 29 16:00:00  N    undotbs1_262_998393859 => +SPARSE/K/DATAFILE/undotbs1_262_998393859.263.998749765
DATAFILE  MIRROR  COARSE   JAN 29 16:00:00  Y    undotbs1_262_998393859.263.998749765
DATAFILE  MIRROR  COARSE   JAN 29 16:00:00  N    undotbs2_264_998393867 => +SPARSE/K/DATAFILE/undotbs2_264_998393867.262.998749765
DATAFILE  MIRROR  COARSE   JAN 29 16:00:00  Y    undotbs2_264_998393867.262.998749765
DATAFILE  MIRROR  COARSE   JAN 29 16:00:00  N    users_265_998393867 => +SPARSE/K/DATAFILE/users_265_998393867.281.998749765
DATAFILE  MIRROR  COARSE   JAN 29 16:00:00  Y    users_265_998393867.281.99874976


I intentionally didn't shut down the source database K, in order to demonstrate the errors.

Let's make the clone:

[oracle@ed03dbadm01 klone_from_klone]$ sqlplus / as sysdba

Connected to an idle instance.
SQL> startup nomount pfile='initkk.ora'
ORACLE instance started.

Total System Global Area 8589933416 bytes
Fixed Size           12217192 bytes
Variable Size         2264924160 bytes
Database Buffers     6241124352 bytes
Redo Buffers           71667712 bytes

SQL> @cf

Control file created.
          
SQL> EXECUTE dbms_dnfs.clonedb_renamefile('+SPARSE/K/DATAFILE/system_260_998393855','+SPARSE/KK/DATAFILE/system_260_998393855');
BEGIN dbms_dnfs.clonedb_renamefile('+SPARSE/K/DATAFILE/system_260_998393855','+SPARSE/KK/DATAFILE/system_260_998393855'); END;

*
ERROR at line 1:
ORA-17515: Creation of clonedb failed using snapshot file
+SPARSE/K/DATAFILE/system_260_998393855
ORA-17515: Creation of clonedb failed using snapshot file
+SPARSE/K/DATAFILE/system_260_998393855
ORA-15304: operation requires ACCESS_CONTROL.ENABLED attribute to be TRUE
ORA-06512: at "SYS.X$DBMS_DNFS", line 10
ORA-06512: at line 1

Our mistake: the SPARSE ASM diskgroup doesn't have 'ACCESS_CONTROL.ENABLED' = 'TRUE' set.

[grid]$ sqlplus / as sysasm

SQL> ALTER DISKGROUP SPARSE SET ATTRIBUTE 'ACCESS_CONTROL.ENABLED' = 'TRUE';

Diskgroup altered.


Repeat rename.sql at KK database:

SQL> EXECUTE dbms_dnfs.clonedb_renamefile('+SPARSE/K/DATAFILE/system_260_998393855','+SPARSE/KK/DATAFILE/system_260_998393855');
BEGIN dbms_dnfs.clonedb_renamefile('+SPARSE/K/DATAFILE/system_260_998393855','+SPARSE/KK/DATAFILE/system_260_998393855'); END;

*
ERROR at line 1:
ORA-17515: Creation of clonedb failed using snapshot file
+SPARSE/K/DATAFILE/system_260_998393855
ORA-17515: Creation of clonedb failed using snapshot file
+SPARSE/K/DATAFILE/system_260_998393855
ORA-06512: at "SYS.X$DBMS_DNFS", line 10
ORA-06512: at line 1

The source database K has not been shut down.
Oracle, please make this message more informative, for example "source DB is not in Read Only state" ;) .

Connect to K and shutdown it:

[oracle@ed03dbadm01 ~]$ . k
[oracle@ed03dbadm01 ~]$ sqlplus / as sysdba
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.


Back to KK:

SQL> EXECUTE dbms_dnfs.clonedb_renamefile('+SPARSE/K/DATAFILE/system_260_998393855','+SPARSE/KK/DATAFILE/system_260_998393855');
BEGIN dbms_dnfs.clonedb_renamefile('+SPARSE/K/DATAFILE/system_260_998393855','+SPARSE/KK/DATAFILE/system_260_998393855'); END;

*
ERROR at line 1:
ORA-17515: Creation of clonedb failed using snapshot file
+SPARSE/K/DATAFILE/system_260_998393855
ORA-17515: Creation of clonedb failed using snapshot file
+SPARSE/K/DATAFILE/system_260_998393855
ORA-15260: permission denied on ASM disk group
ORA-06512: at "SYS.X$DBMS_DNFS", line 10
ORA-06512: at line 1

Permission denied !


At ASM:
$ sqlplus / as sysasm
ALTER DISKGROUP SPARSE set permission owner=read ONLY, group=read ONLY, other=none for file '+SPARSE/K/DATAFILE/system_260_998393855.265.998749747';  
ALTER DISKGROUP SPARSE set permission owner=read ONLY, group=read ONLY, other=none for file '+SPARSE/K/DATAFILE/sysaux_261_998393857';  
ALTER DISKGROUP SPARSE set permission owner=read ONLY, group=read ONLY, other=none for file '+SPARSE/K/DATAFILE/undotbs1_262_998393859';
ALTER DISKGROUP SPARSE set permission owner=read ONLY, group=read ONLY, other=none for file '+SPARSE/K/DATAFILE/undotbs2_264_998393867';
ALTER DISKGROUP SPARSE set permission owner=read ONLY, group=read ONLY, other=none for file '+SPARSE/K/DATAFILE/users_265_998393867';
ALTER DISKGROUP SPARSE set permission owner=read ONLY, group=read ONLY, other=none for file '+SPARSE/K/DATAFILE/tpch_291_998477197';    


SQL> ALTER DISKGROUP SPARSE set permission owner=read ONLY, group=read ONLY, other=none for file '+SPARSE/K/DATAFILE/system_260_998393855.265.998749747';  
Diskgroup altered.

SQL> ALTER DISKGROUP SPARSE set permission owner=read ONLY, group=read ONLY, other=none for file '+SPARSE/K/DATAFILE/sysaux_261_998393857';  
Diskgroup altered.

SQL> ALTER DISKGROUP SPARSE set permission owner=read ONLY, group=read ONLY, other=none for file '+SPARSE/K/DATAFILE/undotbs1_262_998393859';
Diskgroup altered.

SQL> ALTER DISKGROUP SPARSE set permission owner=read ONLY, group=read ONLY, other=none for file '+SPARSE/K/DATAFILE/undotbs2_264_998393867';
Diskgroup altered.

SQL> ALTER DISKGROUP SPARSE set permission owner=read ONLY, group=read ONLY, other=none for file '+SPARSE/K/DATAFILE/users_265_998393867';
Diskgroup altered.

SQL> ALTER DISKGROUP SPARSE set permission owner=read ONLY, group=read ONLY, other=none for file '+SPARSE/K/DATAFILE/tpch_291_998477197';    
Diskgroup altered.



And we get an error one more time:

SQL> EXECUTE dbms_dnfs.clonedb_renamefile('+SPARSE/K/DATAFILE/system_260_998393855','+SPARSE/KK/DATAFILE/system_260_998393855');
BEGIN dbms_dnfs.clonedb_renamefile('+SPARSE/K/DATAFILE/system_260_998393855','+SPARSE/KK/DATAFILE/system_260_998393855'); END;

*
ERROR at line 1:
ORA-01565: error in identifying file '+SPARSE/K/DATAFILE/system_260_998393855'
ORA-17503: ksfdopn:2 Failed to open file
+SPARSE/K/DATAFILE/system_260_998393855
ORA-15260: permission denied on ASM disk group
ORA-06512: at "SYS.X$DBMS_DNFS", line 10
ORA-06512: at line 1


Look at the ASM file permissions:

[grid@ed03dbadm01 ~]$ asmcmd ls --permission +SPARSE/K/DATAFILE
User  Group  Permission  Name
              r--r-----  sysaux_261_998393857 => +SPARSE/K/DATAFILE/sysaux_261_998393857.264.998749765
              r--r-----  sysaux_261_998393857.264.998749765
              r--r-----  system_260_998393855 => +SPARSE/K/DATAFILE/system_260_998393855.265.998749747
              r--r-----  system_260_998393855.265.998749747
              r--r-----  tpch_291_998477197 => +SPARSE/K/DATAFILE/tpch_291_998477197.274.998749765
              r--r-----  tpch_291_998477197.274.998749765
              r--r-----  undotbs1_262_998393859 => +SPARSE/K/DATAFILE/undotbs1_262_998393859.263.998749765
              r--r-----  undotbs1_262_998393859.263.998749765
              r--r-----  undotbs2_264_998393867 => +SPARSE/K/DATAFILE/undotbs2_264_998393867.262.998749765
              r--r-----  undotbs2_264_998393867.262.998749765
              r--r-----  users_265_998393867 => +SPARSE/K/DATAFILE/users_265_998393867.281.998749765
              r--r-----  users_265_998393867.281.998749765

See: there are no User & Group permissions set on the K database files.
The solution:

ALTER DISKGROUP SPARSE SET OWNERSHIP OWNER='oracle' FOR FILE '+SPARSE/K/DATAFILE/system_260_998393855';
ALTER DISKGROUP SPARSE SET OWNERSHIP OWNER='oracle' FOR FILE '+SPARSE/K/DATAFILE/sysaux_261_998393857';
ALTER DISKGROUP SPARSE SET OWNERSHIP OWNER='oracle' FOR FILE '+SPARSE/K/DATAFILE/undotbs1_262_998393859';
ALTER DISKGROUP SPARSE SET OWNERSHIP OWNER='oracle' FOR FILE '+SPARSE/K/DATAFILE/undotbs2_264_998393867';
ALTER DISKGROUP SPARSE SET OWNERSHIP OWNER='oracle' FOR FILE '+SPARSE/K/DATAFILE/users_265_998393867';
ALTER DISKGROUP SPARSE SET OWNERSHIP OWNER='oracle' FOR FILE '+SPARSE/K/DATAFILE/tpch_291_998477197';


SQL> ALTER DISKGROUP SPARSE SET OWNERSHIP OWNER='oracle' FOR FILE '+SPARSE/K/DATAFILE/system_260_998393855';
Diskgroup altered.

SQL> ALTER DISKGROUP SPARSE SET OWNERSHIP OWNER='oracle' FOR FILE '+SPARSE/K/DATAFILE/sysaux_261_998393857';
Diskgroup altered.

SQL> ALTER DISKGROUP SPARSE SET OWNERSHIP OWNER='oracle' FOR FILE '+SPARSE/K/DATAFILE/undotbs1_262_998393859';
Diskgroup altered.

SQL> ALTER DISKGROUP SPARSE SET OWNERSHIP OWNER='oracle' FOR FILE '+SPARSE/K/DATAFILE/undotbs2_264_998393867';
Diskgroup altered.

SQL> ALTER DISKGROUP SPARSE SET OWNERSHIP OWNER='oracle' FOR FILE '+SPARSE/K/DATAFILE/users_265_998393867';
Diskgroup altered.

SQL> ALTER DISKGROUP SPARSE SET OWNERSHIP OWNER='oracle' FOR FILE '+SPARSE/K/DATAFILE/tpch_291_998477197';
Diskgroup altered.

Now the User column is populated for the ASM files:

[grid@ed03dbadm01 ~]$ asmcmd ls --permission +SPARSE/K/DATAFILE
User                                     Group  Permission  Name
oracle@697ac408d88aff8bbfa7863a8fbb6e8d          r--r-----  sysaux_261_998393857 => +SPARSE/K/DATAFILE/sysaux_261_998393857.264.998749765
oracle@697ac408d88aff8bbfa7863a8fbb6e8d          r--r-----  sysaux_261_998393857.264.998749765
oracle@697ac408d88aff8bbfa7863a8fbb6e8d          r--r-----  system_260_998393855 => +SPARSE/K/DATAFILE/system_260_998393855.265.998749747
oracle@697ac408d88aff8bbfa7863a8fbb6e8d          r--r-----  system_260_998393855.265.998749747
oracle@697ac408d88aff8bbfa7863a8fbb6e8d          r--r-----  tpch_291_998477197 => +SPARSE/K/DATAFILE/tpch_291_998477197.274.998749765
oracle@697ac408d88aff8bbfa7863a8fbb6e8d          r--r-----  tpch_291_998477197.274.998749765
oracle@697ac408d88aff8bbfa7863a8fbb6e8d          r--r-----  undotbs1_262_998393859 => +SPARSE/K/DATAFILE/undotbs1_262_998393859.263.998749765
oracle@697ac408d88aff8bbfa7863a8fbb6e8d          r--r-----  undotbs1_262_998393859.263.998749765
oracle@697ac408d88aff8bbfa7863a8fbb6e8d          r--r-----  undotbs2_264_998393867 => +SPARSE/K/DATAFILE/undotbs2_264_998393867.262.998749765
oracle@697ac408d88aff8bbfa7863a8fbb6e8d          r--r-----  undotbs2_264_998393867.262.998749765
oracle@697ac408d88aff8bbfa7863a8fbb6e8d          r--r-----  users_265_998393867 => +SPARSE/K/DATAFILE/users_265_998393867.281.998749765
oracle@697ac408d88aff8bbfa7863a8fbb6e8d          r--r-----  users_265_998393867.281.998749765


And our rename commands now work:

SQL> EXECUTE dbms_dnfs.clonedb_renamefile('+SPARSE/K/DATAFILE/system_260_998393855','+SPARSE/KK/DATAFILE/system_260_998393855');
PL/SQL procedure successfully completed.

SQL> EXECUTE dbms_dnfs.clonedb_renamefile('+SPARSE/K/DATAFILE/sysaux_261_998393857','+SPARSE/KK/DATAFILE/sysaux_261_998393857');
PL/SQL procedure successfully completed.

SQL> EXECUTE dbms_dnfs.clonedb_renamefile('+SPARSE/K/DATAFILE/undotbs1_262_998393859','+SPARSE/KK/DATAFILE/undotbs1_262_998393859');
PL/SQL procedure successfully completed.

SQL> EXECUTE dbms_dnfs.clonedb_renamefile('+SPARSE/K/DATAFILE/undotbs2_264_998393867','+SPARSE/KK/DATAFILE/undotbs2_264_998393867');
PL/SQL procedure successfully completed.

SQL> EXECUTE dbms_dnfs.clonedb_renamefile('+SPARSE/K/DATAFILE/users_265_998393867','+SPARSE/KK/DATAFILE/users_265_998393867');
PL/SQL procedure successfully completed.

SQL> EXECUTE dbms_dnfs.clonedb_renamefile('+SPARSE/K/DATAFILE/tpch_291_998477197','+SPARSE/KK/DATAFILE/tpch_291_998477197');
PL/SQL procedure successfully completed.
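The dbms_dnfs.clonedb_renamefile calls follow the same pattern, so they can be generated the same way. Again a sketch: the file names are from this example, and the output must be executed in SQL*Plus while the clone database KK is mounted.

```shell
# Parent (snapshot) files live under +SPARSE/K; the clone's sparse
# children are created under +SPARSE/KK with the same file names.
files="system_260_998393855 sysaux_261_998393857 undotbs1_262_998393859 undotbs2_264_998393867 users_265_998393867 tpch_291_998477197"

for f in $files; do
  printf "EXECUTE dbms_dnfs.clonedb_renamefile('+SPARSE/K/DATAFILE/%s','+SPARSE/KK/DATAFILE/%s');\n" "$f" "$f"
done
```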



SQL> alter database open resetlogs;
*
ERROR at line 1:
ORA-01152: file 1 was not restored from a sufficiently old backup
ORA-01110: data file 1: '+SPARSE/KK/DATAFILE/system_260_998393855'

This error appears because KK's controlfile is newer than K's datafiles: we shut down K after the controlfile for KK was created.
Solution:

SQL> alter database recover until cancel using backup controlfile;
alter database recover until cancel using backup controlfile
*
ERROR at line 1:
ORA-00279: change 2543440 generated at 01/29/2019 17:30:18 needed for thread 1
ORA-00289: suggestion : +DATAC1
ORA-15173: entry 'ARCHIVELOG' does not exist in directory 'KK'
ORA-00280: change 2543440 for thread 1 is in sequence #2


SQL> alter database recover cancel;

Database altered.

SQL> alter database open resetlogs;

Database altered.


The KK database (a clone of a clone) is created!

Let's look at V$CLONEDFILE (backed by x$ksfdsscloneinfo):

SQL> col num for 999
SQL> col parent for a42
SQL> col child for a42
SQL> SELECT filenumber num, snapshotfilename parent, clonefilename child FROM x$ksfdsscloneinfo;

 NUM PARENT                                     CHILD
---- ------------------------------------------ ------------------------------------------
   1 +SPARSE/K/DATAFILE/system_260_998393855    +SPARSE/KK/DATAFILE/system_260_998393855
   2 +SPARSE/K/DATAFILE/sysaux_261_998393857    +SPARSE/KK/DATAFILE/sysaux_261_998393857
   3 +SPARSE/K/DATAFILE/undotbs1_262_998393859  +SPARSE/KK/DATAFILE/undotbs1_262_998393859
   4 +SPARSE/K/DATAFILE/undotbs2_264_998393867  +SPARSE/KK/DATAFILE/undotbs2_264_998393867
   5 +SPARSE/K/DATAFILE/users_265_998393867     +SPARSE/KK/DATAFILE/users_265_998393867
   6 +SPARSE/K/DATAFILE/tpch_291_998477197      +SPARSE/KK/DATAFILE/tpch_291_998477197

6 rows selected.

At the end of the process we have the test master database "TM".
From TM we created the first-level clone database "K" (a child of TM).
From K we created the second-level clone "KK" (a child of K).

Now KK is running, while K and TM are stopped.
What happens if we open K READ-WRITE?


[oracle@ed03dbadm01 ~]$ srvctl start db -d k
PRCR-1079 : Failed to start resource ora.k.db
CRS-5017: The resource action "ora.k.db start" encountered the following error:
ORA-03113: end-of-file on communication channel
Process ID: 379727
Session ID: 1167 Serial number: 15923
. For details refer to "(:CLSN00107:)" in "/u01/app/grid/diag/crs/ed03dbadm01/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.k.db' on 'ed03dbadm01' failed
CRS-2632: There are no more servers to try to place resource 'ora.k.db' on that would satisfy its placement policy
CRS-5017: The resource action "ora.k.db start" encountered the following error:
ORA-03113: end-of-file on communication channel
Process ID: 241835
Session ID: 1070 Serial number: 36687
. For details refer to "(:CLSN00107:)" in "/u01/app/grid/diag/crs/ed03dbadm02/crs/trace/crsd_oraagent_oracle.trc".

CRS-2674: Start of 'ora.k.db' on 'ed03dbadm02' failed


Look at K's alert.log:




ALTER DATABASE MOUNT
...
ALTER DATABASE OPEN
...
Errors in file /u01/app/oracle/diag/rdbms/k/k1/trace/k1_lgwr_379662.trc:
ORA-01110: data file 2: '+SPARSE/K/DATAFILE/sysaux_261_998393857'
ORA-01114: IO error writing block to file 2 (block # 1)
ORA-27009: cannot write to file opened for read

Errors in file /u01/app/oracle/diag/rdbms/k/k1/trace/k1_lgwr_379662.trc:
ORA-01110: data file 1: '+SPARSE/K/DATAFILE/system_260_998393855'
ORA-01114: IO error writing block to file 1 (block # 1)
ORA-27009: cannot write to file opened for read

Errors in file /u01/app/oracle/diag/rdbms/k/k1/trace/k1_lgwr_379662.trc:
ORA-01114: IO error writing block to file 3 (block # 1)
ORA-27009: cannot write to file opened for read
ORA-01114: IO error writing block to file 4 (block # 1)
ORA-27009: cannot write to file opened for read
ORA-17528: A read-only file or a file opened read-only cannot be written to: +SPARSE/K/DATAFILE/system_260_998393855.

Errors in file /u01/app/oracle/diag/rdbms/k/k1/trace/k1_ora_379727.trc:
ORA-01114: IO error writing block to file 1 (block # )
2019-01-30T12:56:21.435186+03:00
...
Dumping diagnostic data in directory=[cdmp_20190130125621], requested by (instance=1, osid=379727), summary=[abnormal instance termination].Instance terminated by USER, pid = 379727


The ASM files are READ-ONLY at the ASM level, so the protection is working!

Set the datafiles read-write:

SQL> ALTER DISKGROUP SPARSE set permission owner=read write, group=read write, other=none for file '+SPARSE/K/DATAFILE/system_260_998393855.265.998749747';  
Diskgroup altered.

SQL> ALTER DISKGROUP SPARSE set permission owner=read write, group=read write, other=none for file '+SPARSE/K/DATAFILE/sysaux_261_998393857';  
Diskgroup altered.

SQL> ALTER DISKGROUP SPARSE set permission owner=read write, group=read write, other=none for file '+SPARSE/K/DATAFILE/undotbs1_262_998393859';
Diskgroup altered.

SQL> ALTER DISKGROUP SPARSE set permission owner=read write, group=read write, other=none for file '+SPARSE/K/DATAFILE/undotbs2_264_998393867';
Diskgroup altered.

SQL> ALTER DISKGROUP SPARSE set permission owner=read write, group=read write, other=none for file '+SPARSE/K/DATAFILE/users_265_998393867';
Diskgroup altered.

SQL> ALTER DISKGROUP SPARSE set permission owner=read write, group=read write, other=none for file '+SPARSE/K/DATAFILE/tpch_291_998477197';    
Diskgroup altered.
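The SET PERMISSION statements can be generated with the same loop as before. A sketch under the same assumptions: the file list is from this example (here the alias names are used uniformly, while the first statement above used the fully-qualified name), and the output is applied via SQL*Plus on the ASM instance.

```shell
# Grant owner and group read-write, no access for others, per file.
files="system_260_998393855 sysaux_261_998393857 undotbs1_262_998393859 undotbs2_264_998393867 users_265_998393867 tpch_291_998477197"

for f in $files; do
  printf "ALTER DISKGROUP SPARSE SET PERMISSION OWNER=read write, GROUP=read write, OTHER=none FOR FILE '+SPARSE/K/DATAFILE/%s';\n" "$f"
done
```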


Then we start database K:

SQL> startup
ORACLE instance started.

Total System Global Area 8589933416 bytes
Fixed Size           12217192 bytes
Variable Size         2415919104 bytes
Database Buffers     6090129408 bytes
Redo Buffers           71667712 bytes
Database mounted.
Database opened.


How is KK doing?

At the moment there are no messages in its alert log, so let's restart KK:
SQL> shutdown immediate

ORA-03113: end-of-file on communication channel
Process ID: 335855
Session ID: 390 Serial number: 8981

Alert_KK.log

Shutting down instance (immediate) (OS id: 335855)

Stopping background process SMCO

Shutting down instance: further logons disabled

Stopping background process CJQ0
Stopping background process MMNL

Stopping background process MMON
License high water mark = 34
OS process OFSD (ospid 211761) idle for 30 seconds, exiting
Dispatchers and shared servers shutdown
ALTER DATABASE CLOSE NORMAL
Stopping Emon pool

IM on ADG: Start of Empty Journal

IM on ADG: End of Empty Journal
Stopping Emon pool
stopping change tracking

Errors in file /u01/app/oracle/diag/rdbms/kk/kk1/trace/kk1_lgwr_211799.trc:
ORA-01243: system tablespace file suffered media failure
ORA-01116: error in opening database file 1
ORA-01110: data file 1: '+SPARSE/KK/DATAFILE/system_260_998393855'
ORA-17532: snapshot (parent)=+SPARSE/K/DATAFILE/system_260_998393855 checkpoint SCN 2543451 is different from snapshot checkpoint SCN 2543440 recorded in file +SPARSE/KK/DATAFILE/system_260_998393855
ORA-17531: snapshot (parent)=+SPARSE/K/DATAFILE/system_260_998393855 modification time 01/30/2019 13:16:09 is different from snapshot creation time 01/29/2019 17:30:18 recorded in file +SPARSE/KK/DATAFILE/system_260_998393855
ORA-17530: snapshot (parent), +SPARSE/K/DATAFILE/system_260_998393855, modified after clone (child) created, +SPARSE/KK/DATAFILE/system_260_998393855


Errors in file /u01/app/oracle/diag/rdbms/kk/kk1/trace/kk1_lgwr_211799.trc:
ORA-01243: system tablespace file suffered media failure
ORA-01116: error in opening database file 1
ORA-01110: data file 1: '+SPARSE/KK/DATAFILE/system_260_998393855'
ORA-17532: snapshot (parent)=+SPARSE/K/DATAFILE/system_260_998393855 checkpoint SCN 2543451 is different from snapshot checkpoint SCN 2543440 recorded in file +SPARSE/KK/DATAFILE/system_260_998393855
ORA-17531: snapshot (parent)=+SPARSE/K/DATAFILE/system_260_998393855 modification time 01/30/2019 13:16:09 is different from snapshot creation time 01/29/2019 17:30:18 recorded in file +SPARSE/KK/DATAFILE/system_260_998393855
ORA-17530: snapshot (parent), +SPARSE/K/DATAFILE/system_260_998393855, modified after clone (child) created, +SPARSE/KK/DATAFILE/system_260_998393855
Errors in file /u01/app/oracle/diag/rdbms/kk/kk1/trace/kk1_lgwr_211799.trc  (incident=249):
ORA-1243 [] [] [] [] [] [] [] [] [] [] [] []
Incident details in: /u01/app/oracle/diag/rdbms/kk/kk1/incident/incdir_249/kk1_lgwr_211799_i249.trc

USER (ospid: 211799): terminating the instance due to ORA error 1243
2019-01-30T13:19:25.122213+03:00
System state dump requested by (instance=1, osid=211799 (LGWR)), summary=[abnormal instance termination]. error - 'Instance is terminating.'
System State dumped to trace file /u01/app/oracle/diag/rdbms/kk/kk1/trace/kk1_diag_211759_20190130131925.trc
2019-01-30T13:19:25.369428+03:00
Dumping diagnostic data in directory=[cdmp_20190130131925], requested by (instance=1, osid=211799 (LGWR)), summary=[abnormal instance termination].
2019-01-30T13:19:26.161385+03:00
DIA0 (ospid: 211780): terminating the instance due to ORA error 1092
Cause - 'Instance is terminating.'

Instance terminated by DIA0, pid = 211780


After we opened K (the parent of KK) read-write, KK crashed.



