Thursday, June 30, 2011

Applying a delta in Progress

If you want to apply a delta to a Progress database, that is, a file containing the schema changes needed to bring one database's objects in line with another's (for example, updating a production database from a development one), you can do it with the mpro utility:

RHPRGPR1:/mnt0/new:> mpro /mydatabase/mydb.db









(PROGRESS ASCII-art banner)

Progress Software Corporation
14 Oak Park
Bedford, Massachusetts 01730
781-280-4000

PROGRESS is a registered trademark of Progress Software Corporation
Copyright 1984-2004
by Progress Software Corporation
All Rights Reserved

After this screen you will be prompted for a user and password; log in as a user with sufficient privileges:

PROGRESS Version 9.1E as of Tue Oct 12 14:40:19 EDT 2004

┌─────────────────── Login ───────────────────┐
│ │
│ Please enter a User Id and Password for │
│ database: mydb │
│ │
│ User Id: DBAUSER │
│ Password: │
│ │
│ (OK) (Cancel) │
└─────────────────────────────────────────────┘












Enter data or press F4 to end.

Then, on the next screen, press F3 and select the Data Dictionary option from the Tools menu:

PROGRESS Version 9.1E as of Tue Oct 12 14:40:19 EDT 2004
File Edit Search Buffer Compile Tools Help
────────────────────────────────┌──────────────────────┐────────────────────────
│ Data Dictionary │
│ OS Shell │
│ Application Compiler │
└──────────────────────┘














─ File: Untitled:1 ─────────────────────────────────────────────────────────────


F1=RUN F3=MENUS F5=OPEN F6=SAVE F8=CLOSE Insert

The next screen will be the Data Dictionary tool:

Database Schema Admin DataServer Utilities PRO/SQL Tools





┌──────────── Welcome to the PROGRESS Data Dictionary ─────────────┐
│ │
│ You may use these facilities to set up or alter the structure of │
│ your database(s) or to perform various administrative functions. │
│ │
└──────────────────────────────────────────────────────────────────┘







Main Menu Data Dictionary
Database: mydb (PROGRESS) Table:

In the Data Dictionary screen, select the Admin menu, then the Load Data and Definitions submenu, then the Data Definitions (.df file) option:

Database Schema Admin DataServer Utilities PRO/SQL Tools
┌────────────────────────────────────────┐
│ Dump Data and Definitions -> │
│ Load Data and Definitions -> │
│ ┌─────────────────────────────────┐ │
│ │ Data Definitions (.df file)... │ │
┌─────────│ │ Table Contents (.d file)... │ │y ─────────────┐
│ │ │ SQL Views... │... │ │
│ You may └─│ User Table Contents... │────┘e structure of │
│ your datab│ Sequences Current Values... │strative functions. │
│ │ Reconstruct Bad Load Records... │ │
└───────────└─────────────────────────────────┘────────────────────┘







Main Menu Data Dictionary
Database: mydb (PROGRESS) Table:

In order to apply a delta file the database must not be in use by any other user; if it is, you will get an error message like this:

Database Schema Admin DataServer Utilities PRO/SQL Tools


















Data Definitions (.df file) Data Dictionary
Database: mydb (PROGRESS) Table:


Database in use by SOMEUSER on w1c3xpr3. Wait or press CTRL-C to stop. (388)

If you have to apply the delta and can safely restart the database or kill the users locking it, do so, preferably from another terminal. Once no users are locking the database, you will get a screen where you select the delta file, along with several other options that you can usually leave unchanged.

Finally, after selecting OK, if the database is still free of other users the delta will be applied quickly and you will return to the Data Dictionary menu.

Tuesday, June 28, 2011

Getting a profile’s DDL

If you want to copy a profile and you don't have a frog, you can use the DBMS_METADATA package to get the DDL you need to recreate it:

SQL> exec dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'SQLTERMINATOR',true);

PL/SQL procedure successfully completed.

SQL> SET LINESIZE 150
SQL> SET PAGESIZE 0
SQL> SET LONG 90000
SQL> SELECT DBMS_METADATA.GET_DDL('PROFILE',PROFILE) from dba_profiles
where PROFILE='MYPROFILE' group by PROFILE;

CREATE PROFILE "MYPROFILE"
LIMIT
COMPOSITE_LIMIT DEFAULT
SESSIONS_PER_USER DEFAULT
CPU_PER_SESSION DEFAULT
CPU_PER_CALL DEFAULT
LOGICAL_READS_PER_SESSION DEFAULT
LOGICAL_READS_PER_CALL DEFAULT
IDLE_TIME UNLIMITED
CONNECT_TIME DEFAULT
PRIVATE_SGA DEFAULT
FAILED_LOGIN_ATTEMPTS DEFAULT
PASSWORD_LIFE_TIME UNLIMITED
PASSWORD_REUSE_TIME DEFAULT
PASSWORD_REUSE_MAX DEFAULT
PASSWORD_VERIFY_FUNCTION "VERIFY_FUNCTION"
PASSWORD_LOCK_TIME DEFAULT
PASSWORD_GRACE_TIME DEFAULT ;

As you can see, there is a verify function for password checking, so you have to create the VERIFY_FUNCTION function first in order to create this profile. You can get the source code of this function by querying the dba_source view:

SQL> select TEXT from dba_source where NAME='VERIFY_FUNCTION' and TYPE='FUNCTION' order by LINE;
FUNCTION verify_function
(username varchar2,
password varchar2,
old_password varchar2)
RETURN boolean IS
n boolean;
m integer;

...

IF differ < 3 THEN
raise_application_error(-20004, 'Password should differ by at \
least 3 characters');
END IF;
END IF;
END IF;
-- Everything is fine; return TRUE ;
RETURN(TRUE);
END;

113 rows selected.
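These statements can be wrapped into a small, reusable SQL*Plus script; a sketch (the file name extract_profile_ddl.sql and the &1 positional parameter are my own choices, not part of the original post):

```shell
# write a parameterized SQL*Plus script that dumps a profile's DDL
cat > extract_profile_ddl.sql <<'EOF'
SET LINESIZE 150
SET PAGESIZE 0
SET LONG 90000
SET VERIFY OFF
exec dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'SQLTERMINATOR', true);
SELECT DBMS_METADATA.GET_DDL('PROFILE', PROFILE) FROM dba_profiles
 WHERE PROFILE = '&1' GROUP BY PROFILE;
EXIT
EOF
# run it for a given profile and spool the DDL to a file, e.g.:
# sqlplus -S "/ as sysdba" @extract_profile_ddl.sql MYPROFILE > myprofile_ddl.sql
```

This way the SET commands and the SQLTERMINATOR transform travel with the query, so you don't have to retype them in each session.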

More information:

57 DBMS_METADATA

Monday, June 27, 2011

Getting a role's DDL

If you want to copy a role and you don't have a frog, you can use the DBMS_METADATA package to get the DDL you need to recreate it:

SQL> exec dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'SQLTERMINATOR', true);

PL/SQL procedure successfully completed.

SQL> SET PAGESIZE 0
SQL> SET LONG 90000
SQL> SELECT DBMS_METADATA.GET_DDL('ROLE', role) || ';' FROM dba_roles WHERE ROLE='MYROLE';

CREATE ROLE "MYROLE";
;

SQL> SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT',role) FROM ROLE_ROLE_PRIVS
WHERE ROLE='MYROLE' group by ROLE;

GRANT "CONNECT" TO "MYROLE";

GRANT "EXP_FULL_DATABASE" TO "MYROLE";


SQL> SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT',role) FROM ROLE_SYS_PRIVS
WHERE ROLE='MYROLE' group by ROLE;

no rows selected

SQL> SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT',role) FROM ROLE_TAB_PRIVS
WHERE ROLE='MYROLE' group by ROLE;

no rows selected

As you can see, there is an extra ; after the CREATE ROLE statement that you can simply ignore, and for this role there are no system grants or object grants.
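If you spool this output to a file in order to replay it elsewhere, the stray semicolon-only line can be stripped with sed before running the script; a sketch, where the sample spool content is taken from the output above and the file names are made up:

```shell
# sample spool from the previous step (content taken from the output above)
cat > myrole_ddl.sql <<'EOF'
CREATE ROLE "MYROLE";
;
GRANT "CONNECT" TO "MYROLE";
GRANT "EXP_FULL_DATABASE" TO "MYROLE";
EOF
# drop lines that contain nothing but a semicolon
sed '/^;[[:space:]]*$/d' myrole_ddl.sql > myrole_ddl_clean.sql
```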

More information:

Oracle Roles
57 DBMS_METADATA

Friday, June 24, 2011

Getting a user's DDL

If you want to copy a user and you don't have a frog, you can use the DBMS_METADATA package to get the DDL you need to recreate it:

SQL> exec dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'SQLTERMINATOR', true);

PL/SQL procedure successfully completed.

SQL> SET PAGESIZE 0
SQL> SET LONG 90000
SQL> SELECT DBMS_METADATA.GET_DDL('USER',username) from dba_users where username='MYUSER';

CREATE USER "MYUSER" IDENTIFIED BY VALUES '123456B563D86FE2'
DEFAULT TABLESPACE "MYTABLESPACE"
TEMPORARY TABLESPACE "TEMP"
PROFILE "MYPROFILE";


SQL> SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT',GRANTEE) FROM DBA_ROLE_PRIVS
WHERE GRANTEE='MYUSER' group by GRANTEE;

GRANT "MYROLE" TO "MYUSER";


SQL> SELECT DBMS_METADATA.GET_GRANTED_DDL('DEFAULT_ROLE','MYUSER') from dual;

ALTER USER "MYUSER" DEFAULT ROLE ALL;


SQL> SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT',GRANTEE) FROM DBA_SYS_PRIVS
WHERE GRANTEE='MYUSER' group by GRANTEE;

GRANT CREATE TABLE TO "MYUSER";


SQL> SELECT DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT',GRANTEE) FROM DBA_TAB_PRIVS
WHERE GRANTEE='MYUSER' group by GRANTEE;

no rows selected

SQL> SELECT DBMS_METADATA.GET_GRANTED_DDL('TABLESPACE_QUOTA','MYUSER') from dual;

DECLARE
TEMP_COUNT NUMBER;
SQLSTR VARCHAR2(200);
BEGIN
SQLSTR := 'ALTER USER "MYUSER" QUOTA UNLIMITED ON "MYTABLESPACE"';
EXECUTE IMMEDIATE SQLSTR;
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -30041 THEN
SQLSTR := 'SELECT COUNT(*) FROM USER_TABLESPACES
WHERE TABLESPACE_NAME = ''MYTABLESPACE'' AND CONTENTS = ''TEMPORARY''';
EXECUTE IMMEDIATE SQLSTR INTO TEMP_COUNT;
IF TEMP_COUNT = 1 THEN RETURN;
ELSE RAISE;
END IF;
ELSE
RAISE;
END IF;
END;
/

DECLARE
TEMP_COUNT NUMBER;
SQLSTR VARCHAR2(200);
BEGIN
SQLSTR := 'ALTER USER "MYUSER" QUOTA 10485760 ON "SYSTEM"';
EXECUTE IMMEDIATE SQLSTR;
EXCEPTION
WHEN OTHERS THEN
IF SQLCODE = -30041 THEN
SQLSTR := 'SELECT COUNT(*) FROM USER_TABLESPACES
WHERE TABLESPACE_NAME = ''SYSTEM'' AND CONTENTS = ''TEMPORARY''';
EXECUTE IMMEDIATE SQLSTR INTO TEMP_COUNT;
IF TEMP_COUNT = 1 THEN RETURN;
ELSE RAISE;
END IF;
ELSE
RAISE;
END IF;
END;
/

As you can see, the TABLESPACE_QUOTA option does not give you simple ALTER USER QUOTA statements, so you can either execute the entire PL/SQL blocks or just pick out the ALTER USER QUOTA statements.
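Picking out the quota statements can also be automated with sed; a sketch, where the sample input is shortened from the PL/SQL blocks above (real spool files may wrap or indent lines differently, so treat the pattern as a starting point):

```shell
# sample of the generated PL/SQL (shortened from the output above)
cat > quota_blocks.sql <<'EOF'
DECLARE
TEMP_COUNT NUMBER;
SQLSTR VARCHAR2(200);
BEGIN
SQLSTR := 'ALTER USER "MYUSER" QUOTA UNLIMITED ON "MYTABLESPACE"';
EXECUTE IMMEDIATE SQLSTR;
END;
/
EOF
# print just the ALTER USER ... QUOTA statements, terminated with ;
sed -n "s/^[[:space:]]*SQLSTR := '\(ALTER USER[^']*\)';\$/\1;/p" quota_blocks.sql
```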

More information:

Copying Oracle Users
57 DBMS_METADATA

Thursday, June 23, 2011

Mapping tablespace usage in Oracle


Let's say that you need to reduce the size of a datafile to reclaim some space (in an emergency, for example). After checking the size and usage of the datafile you issue an ALTER DATABASE DATAFILE RESIZE, only to get this error:

ORA-03297: file contains used data beyond requested RESIZE value

You may wonder why you cannot reduce the size of that datafile when there is enough free space; most likely it is because some segment (a table, an index, etc.) sits beyond the resize value. You have to relocate that segment with techniques like rebuilding indexes and moving tables, but before trying to resize a datafile you might like to map the tablespace usage with this script: Tablespace_map2.sql

Disclaimer: I DIDN'T WRITE THIS SCRIPT; I just modified it to split the maps by datafile, as stated in the comments at the beginning of the script. I was unable to find the original author, so I cannot credit them properly.

After running this script you will get an HTML file with a usage map of the selected tablespace, split by datafile; this script is very handy if you can only run scripts at a SQL prompt. If you place the cursor over a block you will get the owner, segment type and name that the block belongs to, or an indication that it is a free block.
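Without the map, you can also list the segments that sit beyond a target resize point by querying dba_extents directly; a sketch, where the file id (7) and the block threshold (128000) are illustrative values you must replace with your own:

```
-- segments with extents beyond the intended new size (in blocks) of datafile 7
SELECT owner, segment_name, segment_type, block_id, blocks
  FROM dba_extents
 WHERE file_id = 7
   AND block_id + blocks - 1 > 128000
 ORDER BY block_id;
```

Any segment this query returns must be moved or rebuilt before the RESIZE to that size can succeed.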

More information:

ORA-03297 Tips

Wednesday, June 22, 2011

How to kill a user in Progress

If you need to kill a user connected to a Progress database (because of locking issues, for example), you can get a list of users and then disconnect one with the proshut command:

myserver:/myprogressdb:> proshut mydb -C list

usr pid time of login user id tty Limbo?

165 20352 Tue Jun 21 08:35:32 2009 SOMEUSER w1c3xpr3 no


myserver:/myprogressdb:> proshut mydb -C disconnect 165

User 165 disconnect initiated. (6796)

You can also disconnect users and shut down databases from the promon menu, selecting option 8:

myserver:/myprogressdb:> promon mydb.db

PROGRESS MONITOR Version 9

Database: /myprogressdb/mydb

1. User Control
2. Locking and Waiting Statistics
3. Block Access
4. Record Locking Table
5. Activity
6. Shared Resources
7. Database Status
8. Shut Down Database

T. Transactions Control
L. Resolve Limbo Transactions
C. Coordinator Information

M. Modify Defaults
Q. Quit

Enter your selection: 8
usr pid time of login user id tty Limbo?
11 24122 Sat May 21 10:23:29 2009 biw no
12 24247 Sat May 21 10:23:30 2009 wdog no
13 14898 Wed Jun 29 08:25:08 2009 MYUSER0001 /dev/pts/1 no
100 18648 Tue Jun 28 11:30:09 2009 MYUSER0002 w1c3xpr3 no
104 15128 Fri Jun 24 11:51:34 2009 MYUSER0003 w1c3xpr4 no
106 13556 Fri Jun 24 11:52:02 2009 MYUSER0004 w1c3xpr4 no
115 20344 Tue Jun 21 13:30:10 2009 MYUSER0003 w1c3xpr4 no
122 16948 Tue Jun 28 10:18:10 2009 MYUSER0005 w1c3xpr3 no
131 11128 Tue Jun 28 09:09:33 2009 MYUSER0006 w1c3xpr3 no
138 13252 Tue Jun 28 09:45:05 2009 MYUSER0007 w1c3xpr3 no
139 9828 Tue Jun 28 09:00:06 2009 MYUSER0008 w1c3xpr4 no
146 11708 Tue Jun 28 08:51:51 2009 MYUSER0009 w1c3xpr3 no
150 10640 Tue Jun 28 08:48:59 2009 MYUSER0010 w1c3xpr3 no
151 15464 Tue Jun 28 08:48:44 2009 MYUSER0001 w1c3xpr4 no

RETURN - show remaining, Q - quit:
usr pid time of login user id tty Limbo?
190 10708 Wed Jun 29 07:21:35 2009 MYUSER0011 w1c3xpr4 no

1 Disconnect a User
2 Unconditional Shutdown
3 Emergency Shutdown (Kill All)
x Exit

Enter choice>

If there are many users you can page through them by pressing RETURN, and after all users are shown you can choose to shut down the database (option 2) or disconnect a user by their usr number (option 1).

Tuesday, June 21, 2011

Oracle exp and imp

Working with exp and imp, I prefer creating configuration files rather than typing everything at the command prompt, because I can keep a "template" and modify it as needed. This is my template for exporting some schemas (not the full database) with data, putting the log file and dump file in /somedir:

BUFFER=1048576
FILE=/somedir/export.dmp
LOG=/somedir/export.log
STATISTICS=NONE
FULL=N
ROWS=Y
OWNER=SOMEUSER,OTHERUSER
DIRECT=Y

The template for exporting a full database is a bit shorter:

BUFFER=1048576
FILE=/somedir/export.dmp
LOG=/somedir/export.log
STATISTICS=NONE
FULL=Y
ROWS=Y
DIRECT=Y

On the other hand, for importing some schemas (from a full or partial export) I use this configuration file:

BUFFER=1048576
FILE=/somedir/export.dmp
LOG=/somedir/export.log
STATISTICS=NONE
FULL=N
ROWS=Y
feedback=10000
commit=n
IGNORE=Y
fromuser=SOMEUSER,OTHERUSER
touser=SOMEUSER,OTHERUSER
CONSTRAINTS=Y
grants=yes

And my template for importing a full database is the following:

BUFFER=1048576
FILE=/somedir/export.dmp
LOG=/somedir/export.log
STATISTICS=NONE
FULL=Y
ROWS=Y
feedback=10000
commit=n
IGNORE=Y

And so I don't forget to set the target database name and character settings (very important), I use this little shell script:

export ORACLE_SID=mydb
export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1

imp / parfile=/somedir/import.par

Having the most important parameters in your template, you can easily change them without checking the imp/exp help or documentation; that's why some parameters are included even with their default values.
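The template idea can be taken one step further with a small script that fills in the directory and schema list for you; a sketch (the script behavior and variable names are my own, not from the original post):

```shell
#!/bin/sh
# write an export parfile like the schema-export template above
# usage: gen_exp_parfile.sh [dir] [owner,owner,...]  (names are made up)
DIR=${1:-.}                         # in real use, e.g. /somedir
OWNERS=${2:-SOMEUSER,OTHERUSER}
cat > "$DIR/export.par" <<EOF
BUFFER=1048576
FILE=$DIR/export.dmp
LOG=$DIR/export.log
STATISTICS=NONE
FULL=N
ROWS=Y
OWNER=$OWNERS
DIRECT=Y
EOF
# then run: exp / parfile=$DIR/export.par
```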

More information:

Import Export FAQ
Checking database character set

Monday, June 20, 2011

Copying files between servers with compression

You may have to copy very big files or file systems between servers from time to time, and the more you optimize the copy, the faster the transfer and the fewer computing resources you use.

You might think that scp alone would do a good job, but without compression you may be wasting bandwidth and time (we are talking about copying file systems of hundreds of gigabytes between servers far away from each other). And if you have a lot of disk space available, why not compress the file system first and then transfer it? Well, because compressing big file systems takes a lot of time and your disk I/O is severely degraded (the server reads a file and tries to write it compressed at the same time); you might render your server practically unusable while compressing big files, at least in AIX.

But you can take a backup, compress it and send it to the other server, uncompressing it there and writing only the files you want to transfer (never writing compressed files to disk at any point), compressing on the fly with pipes. To this end, you can write this little script and put it in an appropriate path on the source server:

#!/bin/sh
tar -cf - "$1" | compress -c

Then, if you named this script mycompressedtar for example, you can transfer files or directories to your target server this way:

otheruser@targetsrv> ssh -l myuser sourcesrv /path/mycompressedtar /some/dir | uncompress -c | tar -xf -

If you issue this command at your target server, you will copy /some/dir from the source server to your current directory; of course, you must have access to /some/dir with the myuser account.

But sometimes you have few but very big files (like Oracle datafiles), and in AIX you cannot transfer them with tar without getting an error; in that case you can transfer them file by file with this little script:

#!/bin/sh
cat "$1" | compress -c

Then, if you named this script mycompress for example, you can copy one file to your target server this way:

otheruser@targetsrv> ssh -l myuser sourcesrv /path/mycompress /some/file | uncompress -c > somefile

If you issue this command at your target server, you will copy the file from the source server to your current directory and name it somefile; you can use the same name as the source file or a different one.
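The same streaming idea can be checked locally before involving a second server; a sketch that round-trips a directory through tar and gzip (gzip standing in for compress on systems that lack it; directory names are made up):

```shell
# build a small test tree
mkdir -p srcdir dstdir
echo "hello" > srcdir/file.txt
# compress on the fly and unpack "on the other side", as the ssh pipe would
tar -cf - srcdir | gzip -c | ( cd dstdir && gzip -dc | tar -xf - )
# between real servers the middle of the pipe is the network, e.g.:
# ssh -l myuser sourcesrv "tar -cf - /some/dir | gzip -c" | gzip -dc | tar -xf -
```

Quoting the whole remote pipeline in the ssh command also removes the need for a helper script on the source server, if gzip is available on both ends.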

But we're talking about different servers, and the files are so valuable we cannot afford to change a single byte. How can we be sure the files are exactly the same? In AIX, you can use the cksum command:

myuser@sourcesrv> cksum myfile.txt
1656934735 2229 myfile.txt

===================================

otheruser@targetsrv> cksum myfile.txt
1656934735 2229 myfile.txt

This way you'll be sure both files are equal.
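The comparison can also be scripted so you don't eyeball the numbers; a sketch comparing two local copies (between servers you would capture one of the sums through ssh instead; file names are made up):

```shell
# create a file and a copy standing in for the transferred file
echo "some data" > original.txt
cp original.txt copy.txt
# compare checksum and byte count, ignoring the file-name field
sum1=$(cksum original.txt | awk '{print $1, $2}')
sum2=$(cksum copy.txt | awk '{print $1, $2}')
if [ "$sum1" = "$sum2" ]; then echo "files match"; else echo "MISMATCH"; fi
```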

More information:

cksum Command

Wednesday, June 15, 2011

Creating an Oracle physical standby database

Data Guard is an Oracle technology that lets you replicate a database and keep the copy up to date according to selected levels of performance and protection; you can have one or more standby databases that you can open in read-only mode for reporting purposes (physical standby) or in read-write mode as long as you don't change the replicated data (logical standby). Data Guard is very useful for disaster recovery and more, so be sure to read all the documentation listed at the end of this post. The following example describes the easiest scenario: creating a physical standby database on a different server from the primary database, with the same paths, in maximum performance mode, copying the primary database with operating system commands (no ASM).

This example is written so that you can create a standby database quickly and easily, but don't be misled by that into skipping the documentation; the more you study the Data Guard technology, the fewer problems you will have trying different scenarios and configurations and troubleshooting your standby database.

The first and most difficult part of creating a standby database is configuring the primary database, so we will begin by enabling force logging on the primary database:

oracle@primarysrv:/opt/oracle/product/10.2.0/db_1/dbs$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Tue Jun 7 12:49:25 2011

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining Scoring Engine options

SQL> select FORCE_LOGGING from v$database;

FOR
---
NO

SQL> ALTER DATABASE FORCE LOGGING;

Database altered.

SQL> select FORCE_LOGGING from v$database;

FOR
---
YES

Next we will get the redo log file information, such as the number of groups, threads, member paths and sizes, and we will create standby log files the same size as the online log files, adding one extra group per thread:

SQL> select GROUP#, MEMBER from v$logfile order by GROUP#;

GROUP# MEMBER
---------- ----------------------------------------
1 /primarydb/logfile/redo01.rdo
1 /fra/primarydb/logfile/redo01.rdo
2 /primarydb/logfile/redo02.rdo
2 /fra/primarydb/logfile/redo02.rdo
3 /primarydb/logfile/redo03.rdo
3 /fra/primarydb/logfile/redo03.rdo

6 rows selected.

SQL> select GROUP#, THREAD#, BYTES from v$log order by GROUP#;

GROUP# THREAD# BYTES
---------- ---------- ----------
1 1 134217728
2 1 134217728
3 1 134217728

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/primarydb/logfile/stdby01.rdo',
'/fra/primarydb/logfile/stdby01.rdo') SIZE 134217728;

Database altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('/primarydb/logfile/stdby02.rdo',
'/fra/primarydb/logfile/stdby02.rdo') SIZE 134217728;

Database altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 ('/primarydb/logfile/stdby03.rdo',
'/fra/primarydb/logfile/stdby03.rdo') SIZE 134217728;

Database altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 7 ('/primarydb/logfile/stdby04.rdo',
'/fra/primarydb/logfile/stdby04.rdo') SIZE 134217728;

Database altered.

SQL> SELECT GROUP#, THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$STANDBY_LOG;

GROUP# THREAD# SEQUENCE# ARC STATUS
---------- ---------- ---------- --- ----------
4 0 0 YES UNASSIGNED
5 0 0 YES UNASSIGNED
6 0 0 YES UNASSIGNED
7 0 0 YES UNASSIGNED
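The sizing used above follows the usual rule of one more standby group than online groups, per thread; under that rule the target count can be computed straight from v$log (a sketch, assuming a single-instance database as in this example):

```
-- (online groups per thread + 1) * threads; here (3 + 1) * 1 = 4 standby groups
SELECT (COUNT(*) / COUNT(DISTINCT THREAD#) + 1) * COUNT(DISTINCT THREAD#) AS standby_groups
  FROM v$log;
```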

After creating the standby log file groups we will check whether the database is in archive mode, has a password file, and has the remote_login_passwordfile parameter properly set; if not, we will create a password file and enable archive mode:

SQL> show parameter REMOTE_LOGIN_PASSWORDFILE

NAME TYPE VALUE
------------------------------------ -------------------- --------------
remote_login_passwordfile string EXCLUSIVE

SQL> select * from V$PWFILE_USERS;

no rows selected

SQL> select NAME, LOG_MODE from v$database;

NAME LOG_MODE
--------- ------------
primarydb NOARCHIVELOG

SQL> SHUTDOWN IMMEDIATE;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining Scoring Engine options
oracle@primarysrv:~$ cd $ORACLE_HOME/dbs
oracle@primarysrv:/opt/oracle/product/10.2.0/db_1/dbs$ orapwd FILE=orapwprimarydb password="somepassword"
oracle@primarysrv:/opt/oracle/product/10.2.0/db_1/dbs$ sqlplus '/ as sysdba'

SQL*Plus: Release 10.2.0.1.0 - Production on Fri Jun 10 11:34:18 2011

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining Scoring Engine options

SQL> STARTUP MOUNT;
ORACLE instance started.

Total System Global Area 268435456 bytes
Fixed Size 1218892 bytes
Variable Size 92276404 bytes
Database Buffers 171966464 bytes
Redo Buffers 2973696 bytes
Database mounted.
SQL> ALTER DATABASE ARCHIVELOG;

Database altered.

SQL> ALTER DATABASE OPEN;

Database altered.

SQL> select NAME, LOG_MODE from v$database;

NAME LOG_MODE
--------- ------------
primarydb ARCHIVELOG

SQL> alter user SYS identified by "mypassword";

User altered.

SQL> select * from V$PWFILE_USERS;

USERNAME SYSDB SYSOP
------------------------------ ----- -----
SYS TRUE TRUE

In the previous step we first created a password file with one password and later changed the SYS password. For the redo transport services to work, both the primary and standby databases must have a password file and the same SYS password; so if the primary database has no password file you can create one and then change the SYS password, create the password file with the proper password directly, or just change the SYS password if the file already exists. On the standby side the situation is different: since you cannot open a physical standby database in read-write mode without making it unsuitable for Data Guard, you cannot change the SYS password there with an ALTER USER statement, so you have to create the standby database's password file with the proper password.

Next we will set the parameters needed for a Data Guard configuration; you have to check all 19 parameters listed below and change them as needed, and you may also have to check other parameters, not listed here, related to server addresses or file system paths.

SQL> column NAME format a30
SQL> column VALUE format a40
SQL> select NAME, VALUE from v$parameter where NAME in ('db_name','db_unique_name',
'log_archive_config','control_files','log_archive_dest','log_archive_dest_1',
'log_archive_dest_2','log_archive_dest_state_1','log_archive_dest_state_2',
'remote_login_passwordfile','log_archive_format','log_archive_max_processes','fal_server',
'fal_client','db_file_name_convert','log_file_name_convert','standby_file_management',
'db_recovery_file_dest','db_recovery_file_dest_size') order by NAME;

NAME VALUE
------------------------------ ----------------------------------------
control_files /primarydb/controlfile/primarydb01.ctl
, /fra/primarydb/controlfile/primarydb02.c
tl

db_file_name_convert
db_name primarydb
db_recovery_file_dest
db_recovery_file_dest_size 0
db_unique_name primarydb
fal_client
fal_server
log_archive_config
log_archive_dest /fra/archfile/
log_archive_dest_1
log_archive_dest_2
log_archive_dest_state_1 enable
log_archive_dest_state_2 enable
log_archive_format primarydb_%t_%s_%r.arc
log_archive_max_processes 2
log_file_name_convert
remote_login_passwordfile EXCLUSIVE
standby_file_management MANUAL

19 rows selected.

SQL> alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=(primarydb,standbydb)';

System altered.

SQL> alter system set log_archive_dest='';

System altered.

SQL> alter system set LOG_ARCHIVE_DEST_1='LOCATION=/fra/primarydb/archivelog VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=primarydb';

System altered.

SQL> alter system set LOG_ARCHIVE_DEST_2='SERVICE=standbydb LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standbydb';

System altered.

SQL> alter system set FAL_SERVER=standbydb;

System altered.

SQL> alter system set FAL_CLIENT=primarydb;

System altered.

SQL> alter system set standby_file_management=AUTO;

System altered.

SQL> select NAME, VALUE from v$parameter where NAME in ('db_name','db_unique_name',
'log_archive_config','control_files','log_archive_dest','log_archive_dest_1',
'log_archive_dest_2','log_archive_dest_state_1','log_archive_dest_state_2',
'remote_login_passwordfile','log_archive_format','log_archive_max_processes','fal_server',
'fal_client','db_file_name_convert','log_file_name_convert','standby_file_management',
'db_recovery_file_dest','db_recovery_file_dest_size') order by NAME;

NAME VALUE
------------------------------ ----------------------------------------
control_files /primarydb/controlfile/primarydb01.ctl
, /fra/primarydb/controlfile/primarydb02.c
tl

db_file_name_convert
db_name primarydb
db_recovery_file_dest
db_recovery_file_dest_size 0
db_unique_name primarydb
fal_client primarydb
fal_server standbydb
log_archive_config DG_CONFIG=(primarydb,standbydb)
log_archive_dest
log_archive_dest_1 LOCATION=/fra/primarydb/archivel
og VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB
_UNIQUE_NAME=primarydb

log_archive_dest_2 SERVICE=standbydb LGWR ASYNC VALID_FOR=(O
NLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_N
AME=standbydb

log_archive_dest_state_1 enable
log_archive_dest_state_2 enable
log_archive_format primarydb_%t_%s_%r.arc
log_archive_max_processes 2
log_file_name_convert
remote_login_passwordfile EXCLUSIVE
standby_file_management AUTO

19 rows selected.

You have to set your relevant Data Guard database parameters to values like the ones above. Having done this, you are almost ready to copy your database to the standby server, along with the database's pfile and a special control file:

SQL> SHUTDOWN IMMEDIATE;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining Scoring Engine options
oracle@primarysrv:~$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Wed Jun 8 12:18:44 2011

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.

SQL> STARTUP MOUNT;
ORACLE instance started.

Total System Global Area 268435456 bytes
Fixed Size 1218892 bytes
Variable Size 92276404 bytes
Database Buffers 171966464 bytes
Redo Buffers 2973696 bytes
Database mounted.
SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/primarydb/standbydb.ctl';

Database altered.

SQL> CREATE PFILE='/primarydb/initstandbydb.ora' FROM SPFILE;

File created.

SQL> SHUTDOWN IMMEDIATE;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining Scoring Engine options

At this point you can copy all the relevant database files (datafiles, tempfiles, online redo logs, standby redo logs, standby control file and pfile) with your favorite method and programs, like scp. Don't forget to place all the files at the same paths; this example assumes that. If you put the database files in a different path, you have to set parameters like db_file_name_convert and log_file_name_convert (not covered in this post). Also, you have to create the directories needed for parameters like background_dump_dest, core_dump_dest, user_dump_dest and audit_file_dest, amongst others.

After that, you have to copy your standby control file to all the paths stated in the control_files parameter, using the same file names; you cannot use the original (primary) control file or you will get an error.

Finally, you have to modify some parameters in the standby pfile for this new database; it is handy to compare the original primary pfile with the standby pfile, as in the following example, just to be sure you modified everything properly. In this standby pfile there are no primarydb.__ parameters, because those correspond to values managed automatically by the Oracle instance and merely saved along by the CREATE PFILE statement. The audit_trail parameter is set to OS because with the DB value the database must be open in read-write mode to record audit events in the database itself, which you cannot do with a physical standby. There is also an extra db_unique_name parameter with the name of the standby database; by default it is optional and equal to db_name (in this case primarydb), but it is mandatory in a standby database.

oracle@standbysrv:/primarydb$ diff initstandbydb.ora initprimarydb.ora
0a1,5
> primarydb.__db_cache_size=171966464
> primarydb.__java_pool_size=4194304
> primarydb.__large_pool_size=4194304
> primarydb.__shared_pool_size=83886080
> primarydb.__streams_pool_size=0
3,4c8,9
< *.audit_trail='os'
---
> *.audit_trail='db'
14,16c19,20
< *.db_unique_name='standbydb'
< *.fal_client='standbydb'
< *.fal_server='primarydb'
---
> *.fal_client='primarydb'
> *.fal_server='standbydb'
22,23c26,27
< *.log_archive_dest_1='LOCATION=/fra/primarydb/standbydb/archivelog VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=standbydb'
< *.log_archive_dest_2='SERVICE=primarydb LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=primarydb'
---
> *.log_archive_dest_1='LOCATION=/fra/primarydb/primarydb/archivelog VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=primarydb'
> *.log_archive_dest_2='SERVICE=standbydb LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standbydb'

But before starting up your primary and standby databases, you first have to include the primary and standby instance connection information in the tnsnames.ora files, like this, on both the primary and standby servers:

PRIMARYDB =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = primarysrv)(PORT = 1521))
)
(CONNECT_DATA = (SERVICE_NAME = primarydb))
)

STANDBYDB =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = standbysrv)(PORT = 1521))
)
(CONNECT_DATA = (SERVICE_NAME = standbydb))
)

Also, you have to have listeners configured for both primary and standby instances (change the listener.ora files as needed):

LISTENER_PRIMARYDB =(ADDRESS_LIST = (ADDRESS= (PROTOCOL=TCP) (HOST = primarysrv) (PORT = 1521)))

SID_LIST_LISTENER_PRIMARYDB =
(SID_LIST =
(SID_DESC =
(SID_NAME = primarydb)
(ORACLE_HOME = /opt/oracle/product/10.2.0/db_1)
(ORACLE_SID = primarydb)
)
)

CONNECT_TIMEOUT_LISTENER_PRIMARYDB=30

Start both the primary and standby database listeners and check them with the tnsping command, on both the primary and standby servers:

oracle@standbysrv:/primarydb$ tnsping primarydb

TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 17-JUN-2011 12:23:42

Copyright (c) 1997, 2005, Oracle. All rights reserved.

Used parameter files:
/opt/oracle/product/10.2.0/db_1/network/admin/sqlnet.ora


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = primarysrv)(PORT = 1521))
(CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = primarydb)))
OK (10 msec)
oracle@standbysrv:/primarydb$ tnsping standbydb

TNS Ping Utility for Linux: Version 10.2.0.1.0 - Production on 17-JUN-2011 12:23:42

Copyright (c) 1997, 2005, Oracle. All rights reserved.

Used parameter files:
/opt/oracle/product/10.2.0/db_1/network/admin/sqlnet.ora


Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = standbysrv)(PORT = 1521))
(CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = standbydb)))
OK (0 msec)

At this point you can create a password file and start up your standby database:

oracle@standbysrv:/primarydb$ export ORACLE_SID=standbydb
oracle@standbysrv:/primarydb$ set|grep ORA
ORACLE_BASE=/opt/oracle
ORACLE_HOME=/opt/oracle/product/10.2.0/db_1
ORACLE_SID=standbydb
oracle@standbysrv:/primarydb$ cd $ORACLE_HOME/dbs
oracle@standbysrv:/opt/oracle/product/10.2.0/db_1/dbs$ orapwd FILE=orapwstandbydb password="mypassword"
oracle@standbysrv:/opt/oracle/product/10.2.0/db_1/dbs$ sqlplus / as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Wed Jun 8 17:15:59 2011

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.

SQL> create spfile from pfile='/primarydb/initstandbydb.ora';

File created.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 268435456 bytes
Fixed Size 1218892 bytes
Variable Size 88082100 bytes
Database Buffers 176160768 bytes
Redo Buffers 2973696 bytes
Database mounted.

If everything was done properly, in the next step the standby database will contact the primary database and start synchronizing data. First, start up your primary database as usual (open in read-write mode); then start the synchronization at the standby database:

SQL> alter database recover managed standby database disconnect from session;

Database altered.

SQL> select name, open_mode, database_role from v$database;

NAME OPEN_MODE DATABASE_ROLE
--------- ---------- ----------------
PRIMARYDB MOUNTED PHYSICAL STANDBY

Also, force a log switch at the primary database in order to begin the transfer:

SQL> ALTER SYSTEM SWITCH LOGFILE;

System altered.

SQL> SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;

SEQUENCE# FIRST_TIME NEXT_TIME
---------- ----------- -----------
6 07-jun-2011 08-jun-2011
7 08-jun-2011 08-jun-2011
8 08-jun-2011 10-jun-2011
9 10-jun-2011 10-jun-2011

After waiting a bit you should see the same information at the standby database:

SQL> select sequence#, first_time, next_time from v$archived_log order by sequence#;

SEQUENCE# FIRST_TIME NEXT_TIME
---------- ------------------ ------------------
6 07-JUN-11 08-JUN-11
7 08-JUN-11 08-JUN-11
8 08-JUN-11 10-JUN-11
9 10-JUN-11 10-JUN-11

4 rows selected.

If you do, congratulations! Everything went fine and you now have a working physical standby database. As a nice extra, you can set up automatic deletion of applied redo logs so that your archive log filesystem doesn't fill up, and you can also verify the standby database by opening it in read-only mode.

Let's configure automatic deletion of applied redo logs by setting the following parameters at the standby database:

SQL> alter system set DB_RECOVERY_FILE_DEST_SIZE=2G;

System altered.

SQL> alter system set DB_RECOVERY_FILE_DEST='/fra/primarydb';

System altered.

SQL> alter system set LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST';

System altered.

SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

Set DB_RECOVERY_FILE_DEST_SIZE a little below the capacity of the DB_RECOVERY_FILE_DEST filesystem; remember that any file not placed there automatically by Oracle does not count towards the limit set by the DB_RECOVERY_FILE_DEST_SIZE parameter. To finish this step, issue the CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY statement in RMAN on both the primary and standby servers:
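To see how close you are to that limit, you can query the V$RECOVERY_FILE_DEST view at the standby database (a sketch, not part of the original session; column names are from the 10g data dictionary):

```sql
-- Flash recovery area usage: configured limit, space used and space
-- that Oracle can reclaim by deleting applied archived logs.
SELECT name,
       space_limit / 1024 / 1024       AS mb_limit,
       space_used / 1024 / 1024        AS mb_used,
       space_reclaimable / 1024 / 1024 AS mb_reclaimable,
       number_of_files
  FROM v$recovery_file_dest;
```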

oracle@primarysrv:/opt/oracle/product/10.2.0/db_1/dbs$ rman

Recovery Manager: Release 10.2.0.1.0 - Production on Fri Jun 10 12:57:46 2011

Copyright (c) 1982, 2005, Oracle. All rights reserved.

RMAN> connect target /

connected to target database: primarydb (DBID=3023816100)

RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;

using target database control file instead of recovery catalog
new RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
new RMAN configuration parameters are successfully stored

RMAN> show all;

RMAN configuration parameters are:
...
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
...

RMAN> exit


Recovery Manager complete.

========================================

oracle@standbysrv:/opt/oracle/product/10.2.0/db_1/dbs$ rman

Recovery Manager: Release 10.2.0.1.0 - Production on Fri Jun 10 12:54:49 2011

Copyright (c) 1982, 2005, Oracle. All rights reserved.

RMAN> connect target /

connected to target database: primarydb (DBID=3023816100, not open)

RMAN> CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;

using target database control file instead of recovery catalog
new RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
new RMAN configuration parameters are successfully stored

RMAN> SHOW ALL;

RMAN configuration parameters are:
...
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON STANDBY;
...
RMAN> exit


Recovery Manager complete.

We have almost finished! Now let's check that the standby database can be opened in read-only mode (for reporting purposes, for example) by opening it and issuing a simple query:

oracle@standbysrv:/opt/oracle/product/10.2.0/db_1/dbs$ sqlplus '/ as sysdba'

SQL*Plus: Release 10.2.0.1.0 - Production on Fri Jun 10 13:16:24 2011

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> alter database recover managed standby database cancel;

Database altered.

SQL> alter database open;

Database altered.

SQL> select count(*) from dba_objects;

COUNT(*)
----------
9715

SQL> select name, open_mode from v$database;

NAME OPEN_MODE
--------------------------- ------------------------------
primarydb READ ONLY

At this point you can also query tables in your own schemas, if you know them.

Finally, restart the synchronization at the standby database:

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
oracle@standbysrv:/opt/oracle/product/10.2.0/db_1/dbs$ sqlplus '/ as sysdba'

SQL*Plus: Release 10.2.0.1.0 - Production on Fri Jun 10 13:20:46 2011

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 268435456 bytes
Fixed Size 1218892 bytes
Variable Size 88082100 bytes
Database Buffers 176160768 bytes
Redo Buffers 2973696 bytes
Database mounted.
SQL> alter database recover managed standby database disconnect from session;

Database altered.

SQL> select sequence#, first_time, next_time from v$archived_log order by sequence#;

SEQUENCE# FIRST_TIME NEXT_TIME
---------- ------------------ ------------------
6 07-JUN-11 08-JUN-11
7 08-JUN-11 08-JUN-11
8 08-JUN-11 10-JUN-11
9 10-JUN-11 10-JUN-11
10 10-JUN-11 10-JUN-11
11 10-JUN-11 10-JUN-11
12 10-JUN-11 10-JUN-11

7 rows selected.

Don't forget to force a log switch at the primary database in order to verify the redo log transfer.

And that's all! We have finished creating and checking an Oracle physical standby database; just don't forget to read the following documentation (or the equivalent for 11g):

Oracle Data Guard Concepts and Administration 10g Release 2 (10.2)
Creating and Maintaining a Password File
Maintenance Of Archivelogs On Standby Databases

Getting trigger information

If you want to know which triggers a user owns, which triggers are defined on a table, or see a trigger's code, you can query the dba_triggers view:

SQL> set linesize 150
SQL> set wrap on
SQL> select OWNER, TRIGGER_NAME, TRIGGER_TYPE, TABLE_OWNER, TABLE_NAME from dba_triggers
where owner in ('SOMEUSER') order by OWNER, TRIGGER_NAME;

OWNER TRIGGER_NAME TRIGGER_TYPE TABLE_OWNER TABLE_NAME
------------------ -------------------- ---------------- ------------------ ------------------
SOMEUSER SOMETRIGGER AFTER EACH ROW SOMEUSER SOMETABLE
SOMEUSER ATRIGGER AFTER EACH ROW SOMEUSER OTHERTABLE
SOMEUSER OTHERTRIGGER AFTER EACH ROW SOMEUSER ATABLE
SOMEUSER MYTRIGGER AFTER EACH ROW SOMEUSER MYTABLE

4 rows selected.

SQL> select TABLE_OWNER, TABLE_NAME, OWNER, TRIGGER_NAME, TRIGGER_TYPE from dba_triggers
where TABLE_OWNER in ('SOMEUSER') order by TABLE_OWNER, TABLE_NAME;

TABLE_OWNER TABLE_NAME OWNER TRIGGER_NAME TRIGGER_TYPE
------------------ ------------------ ------------------ -------------------- ----------------
SOMEUSER SOMETABLE SOMEUSER TRIGGER_ONE BEFORE STATEMENT
SOMEUSER SOMETABLE SOMEUSER TRIGGER_TWO AFTER STATEMENT
SOMEUSER SOMETABLE SOMEUSER SOMETRIGGER AFTER EACH ROW
SOMEUSER OTHERTABLE SOMEUSER TRIGGER_THREE AFTER STATEMENT

4 rows selected.

SQL> set long 20000
SQL> select TRIGGER_BODY from dba_triggers where OWNER='SOMEUSER' and TRIGGER_NAME='SOMETRIGGER';

TRIGGER_BODY
--------------------------------------------------------------------------------
BEGIN
IF INSERTING THEN
log_id ('', NEW.id);
ELSE
log_id (OLD.id, NEW.id);
END IF;
END;
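The TRIGGER_BODY column contains only the body; if you want the complete DDL, including the triggering event and any WHEN clause, DBMS_METADATA can generate it (a sketch; the owner and trigger names are the same placeholders used above):

```sql
SET LONG 20000
-- Full CREATE TRIGGER statement, rebuilt from the data dictionary
SELECT DBMS_METADATA.GET_DDL('TRIGGER', 'SOMETRIGGER', 'SOMEUSER') FROM dual;
```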

Tuesday, June 14, 2011

Exporting schemas with invalid objects

Let's say that you want to copy a schema from one Oracle database to another; you first export that schema with exp, which finishes without warnings:

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)

About to export the entire database ...
. exporting tablespace definitions
. exporting profiles
. exporting user definitions
. exporting roles
. exporting resource costs
. exporting rollback segment definitions
. exporting database links

...

. exporting post-schema procedural objects and actions
. exporting user history table
. exporting default and system auditing options
. exporting statistics
Export terminated successfully without warnings.

Then you import it into your target database, again without warnings:

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning option

Export file created by EXPORT:V10.02.01 via direct path
import done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
export client uses US7ASCII character set (possible charset conversion)
. importing SOMEUSER's objects into SOMEUSER
. . importing table "TABLE_ONE"
0 rows imported
. . importing table "TABLE_TWO"
0 rows imported
. . importing table "TABLE_THREE"
22 rows imported

...

. . importing table "TABLE_SIXTY_ONE"
0 rows imported
. . importing table "TABLE_SIXTY_TWO"
1 rows imported
. . importing table "TABLE_SIXTY_THREE"
0 rows imported
Import terminated successfully without warnings.

Just to be sure, you count the number of objects in that schema in both databases, only to find out that there are fewer objects in your target database:

SQL> select OWNER, count(*) from dba_objects where OWNER in ('SOMEUSER')
group by OWNER order by OWNER;

OWNER COUNT(*)
------------------------------ ----------
SOMEUSER 241

=====================================================

SQL> select OWNER, count(*) from dba_objects where OWNER in ('SOMEUSER')
group by OWNER order by OWNER;

OWNER COUNT(*)
------------------------------ ----------
SOMEUSER 205

This is a good time to check and recompile invalid objects, so you run the utlrp script in your source database and query the dba_objects view again:

@?/rdbms/admin/utlrp.sql;

...

DOC> The following query reports the number of objects that have compiled
DOC> with errors (objects that compile with errors have status set to 3 in
DOC> obj$). If the number is higher than expected, please examine the error
DOC> messages reported with each object (using SHOW ERRORS) to see if they
DOC> point to system misconfiguration or resource constraints that must be
DOC> fixed before attempting to recompile these objects.
DOC>#

OBJECTS WITH ERRORS
-------------------
2

DOC> The following query reports the number of errors caught during
DOC> recompilation. If this number is non-zero, please query the error
DOC> messages in the table UTL_RECOMP_ERRORS to see if any of these errors
DOC> are due to misconfiguration or resource constraints that must be
DOC> fixed before objects can compile successfully.
DOC>#

ERRORS DURING RECOMPILATION
---------------------------
1


PL/SQL procedure successfully completed.

SQL> column OBJECT_NAME format a50
SQL> select OWNER, OBJECT_TYPE, OBJECT_NAME from dba_objects where STATUS='INVALID'
order by OWNER, OBJECT_TYPE, OBJECT_NAME;

OWNER OBJECT_TYPE OBJECT_NAME
------------------------------ ------------------- -----------------------------
SOMEUSER PROCEDURE SOME_PROCEDURE
PERFSTAT PACKAGE BODY STATSPACK
SYS PACKAGE BODY DBMS_SWRF_REPORT_INTERNAL

Now you know why there are fewer objects in your target database: the invalid objects were not exported. You might want to check the procedure's errors and fix them before doing the export/import operation again.

Monday, June 13, 2011

How to enable partitioning in Oracle

Let's say that you exported some data with exp from one database with no errors or warnings, and when you import it into another database you get an ORA-00439 error:

Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production

Export file created by EXPORT:V10.02.01 via direct path
import done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
export client uses US7ASCII character set (possible charset conversion)
. importing MYUSER's objects into MYUSER
. . importing table "MYTABLE"
0 rows imported
. . importing table "ATABLE"
0 rows imported
. . importing table "ONETABLE"
22 rows imported
IMP-00017: following statement failed with ORACLE error 439:
"CREATE TABLE "BADTABLE" ("LOGDATE" CHAR(8), "SOMEINFO" CHAR(6)
"ENABLE) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 TABLESPACE "MYTABLES"
"PACE" LOGGING PARTITION BY RANGE ("LOGDATE" ) (PARTITION "ONE""
" VALUES LESS THAN ('20050301') PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 2"
"55 STORAGE(INITIAL 655360 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT"
") TABLESPACE "MYTABLESPACE" LOGGING NOCOMPRESS, PARTITION "TWO" VALUE"
"S LESS THAN ('20040301') PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 STO"
"RAGE(INITIAL 655360 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT) TABL"
"ESPACE "MYTABLESPACE" LOGGING NOCOMPRESS )"
IMP-00003: ORACLE error 439 encountered
ORA-00439: feature not enabled: Partitioning

This error means that partitioning is not enabled in the Oracle Database engine; if you didn't enable it at installation time, you can do so with these commands:

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk part_on
make -f ins_rdbms.mk ioracle

It is better to shut down all Oracle instances before doing this. Below is an example of enabling partitioning in an Oracle Database 11g engine and verifying it:

$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk part_on
/usr/bin/ar d /oracle_11g/product/11.1.0/rdbms/lib/libknlopt.a ksnkkpo.o
/usr/bin/ar cr /oracle_11g/product/11.1.0/rdbms/lib/libknlopt.a /oracle_11g/product/11.1.0/rdbms/lib/kkpoban.o
$ make -f ins_rdbms.mk ioracle
chmod 755 /oracle_11g/product/11.1.0/bin

- Linking Oracle
rm -f /oracle_11g/product/11.1.0/rdbms/lib/oracle
gcc -o /oracle_11g/product/11.1.0/rdbms/lib/oracle -L/oracle_11g/product/11.1.0/rdbms/lib/ -L/oracle_11g/product/11.1.0/lib/ -L/oracle_11g/product/11.1.0/lib/stubs/ -Wl,-E /oracle_11g/product/11.1.0/rdbms/lib/opimai.o /oracle_11g/product/11.1.0/rdbms/lib/ssoraed.o /oracle_11g/product/11.1.0/rdbms/lib/ttcsoi.o -Wl,--whole-archive -lperfsrv11 -Wl,--no-whole-archive /oracle_11g/product/11.1.0/lib/nautab.o /oracle_11g/product/11.1.0/lib/naeet.o /oracle_11g/product/11.1.0/lib/naect.o /oracle_11g/product/11.1.0/lib/naedhs.o /oracle_11g/product/11.1.0/rdbms/lib/config.o -lserver11 -lodm11 -lcell11 -lnnet11 -lskgxp11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 -lclient11 -lvsn11 -lcommon11 -lgeneric11 -lknlopt `if /usr/bin/ar tv /oracle_11g/product/11.1.0/rdbms/lib/libknlopt.a | grep xsyeolap.o > /dev/null 2>&1 ; then echo "-loraolap11" ; fi` -lslax11 -lpls11 -lrt -lplp11 -lserver11 -lclient11 -lvsn11 -lcommon11 -lgeneric11 `if [ -f /oracle_11g/product/11.1.0/lib/libavserver11.a ] ; then echo "-lavserver11" ; else echo "-lavstub11"; fi` `if [ -f /oracle_11g/product/11.1.0/lib/libavclient11.a ] ; then echo "-lavclient11" ; fi` -lknlopt -lslax11 -lpls11 -lrt -lplp11 -ljavavm11 -lserver11 -lwwg `cat /oracle_11g/product/11.1.0/lib/ldflags` -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnro11 `cat /oracle_11g/product/11.1.0/lib/ldflags` -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnnz11 -lzt11 -lztkg11 -lmm -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 -lztkg11 `cat /oracle_11g/product/11.1.0/lib/ldflags` -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnro11 `cat /oracle_11g/product/11.1.0/lib/ldflags` -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnnz11 -lzt11 -lztkg11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 `if /usr/bin/ar tv 
/oracle_11g/product/11.1.0/rdbms/lib/libknlopt.a | grep "kxmnsd.o" > /dev/null 2>&1 ; then echo " " ; else echo "-lordsdo11"; fi` -L/oracle_11g/product/11.1.0/ctx/lib/ -lctxc11 -lctx11 -lzx11 -lgx11 -lctx11 -lzx11 -lgx11 -lordimt11 -lclsra11 -ldbcfg11 -lhasgen11 -lcore11 -lskgxn2 -locr11 -locrb11 -locrutl11 -lhasgen11 -lcore11 -lskgxn2 -loraz -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 -lsnls11 -lunls11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 -laio `cat /oracle_11g/product/11.1.0/lib/sysliblist` -Wl,-rpath,/oracle_11g/product/11.1.0/lib -lm `cat /oracle_11g/product/11.1.0/lib/sysliblist` -ldl -lm -L/oracle_11g/product/11.1.0/lib
test ! -f /oracle_11g/product/11.1.0/bin/oracle ||\
mv -f /oracle_11g/product/11.1.0/bin/oracle /oracle_11g/product/11.1.0/bin/oracleO
mv /oracle_11g/product/11.1.0/rdbms/lib/oracle /oracle_11g/product/11.1.0/bin/oracle
chmod 6751 /oracle_11g/product/11.1.0/bin/oracle
$ sqlplus / as sysdba

SQL*Plus: Release 11.1.0.7.0 - Production on Thu Jun 9 17:33:48 2011

Copyright (c) 1982, 2008, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning option

SQL> column PARAMETER format a30
SQL> column VALUE format a30
SQL> select * from v$option where parameter = 'Partitioning';

PARAMETER VALUE
------------------------------ ------------------------------
Partitioning TRUE

More information:

ORA-00439: feature not enabled: Partitioning

Wednesday, June 8, 2011

Progress instances dying and read-only filesystems

One day a user tells you that one (or many) of your Progress database instances is down, so you first check whether it is really running with the ps and proutil commands. The instance is indeed down, so you check the database log, only to find nothing. You try to start the instance with proserve, and you finally realize why it died: the database's filesystem was remounted in read-only mode.

This can happen with a misconfigured or heavily used remote filesystem (iSCSI, for example), and it has to be solved by the sysadmin in charge of that server.

[root@myserver]# ps ax|grep mydb
18187 pts/2 S+ 0:00 grep mydb
[root@myserver]# proutil /progdbs/mydb.db -C holder
PROGRESS Version 9.1E as of Tue Oct 12 14:40:19 EDT 2004

** Cannot find or open file /progdbs/mydb.lk, errno = 17. (43)
[root@myserver]# cd /progdbs
[root@myserver]# tail --lines=10 mydb.lg
14:59:09 SRV 1: Previous message sent on behalf of user 183. (5512)
14:59:10 SRV 1: User is SOMEUSER. (708)
14:59:10 SRV 1: Previous message sent on behalf of user 183. (5512)
14:59:16 SRV 1: Logout usernum 183, userid SOMEUSER, on W1C3XDV1. (739)
14:59:44 SRV 1: Login usernum 183, userid ANUSER, on W1C3XDV1. (742)
14:59:44 SRV 1: User is OTHERUSER. (708)
14:59:44 SRV 1: Previous message sent on behalf of user 183. (5512)
14:59:44 SRV 1: User is OTHERUSER. (708)
14:59:44 SRV 1: Previous message sent on behalf of user 183. (5512)
14:59:44 SRV 1: User is OTHERUSER. (708)
14:59:44 SR
[root@myserver]# proserve /progdbs/mydb -pf /progdbs/mydb.pf
PROGRESS Version 9.1E as of Tue Oct 12 14:40:19 EDT 2004
10:23:38 BROKER : ** Cannot find or open file /progdbs/mydb.lk, errno = 17. (43)
10:23:38 BROKER : ** This process terminated with exit code 2. (8619)
[root@myserver]# ls -la /progdbs/mydb.lk
-r--r--r-- 1 root root 38 Dec 21 11:44 /progdbs/mydb.lk
[root@myserver]# touch /progdbs/hello.txt
touch: cannot touch `/progdbs/hello.txt': Read-only file system
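A quick way to confirm the read-only state without creating test files is to look the mount options up in /proc/mounts (a Linux-specific sketch; /progdbs is the hypothetical mount point from the session above):

```shell
# Print "ro" or "rw" for the filesystem containing the given path,
# taken from the first mount option recorded in /proc/mounts.
fs_mode() {
  dir=$(df -P "$1" | awk 'NR==2 {print $6}')   # resolve the mount point
  awk -v m="$dir" '$2 == m {split($4, o, ","); print o[1]; exit}' /proc/mounts
}

# fs_mode /progdbs    # prints "ro" once the filesystem has gone read-only
```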

Monday, June 6, 2011

New memory and dump parameters in Oracle 11g

If you want to migrate a database from Oracle 10g to Oracle 11g, in general you can dump all the database parameters with a CREATE PFILE statement and start your new Oracle 11g instance with that pfile. However, it is better to replace some deprecated parameters with their new Oracle 11g equivalents; it takes just a couple of minutes and you won't get annoying messages about deprecated parameters at startup:

SQL> startup nomount pfile=initmydb.ora
ORA-32006: BACKGROUND_DUMP_DEST initialization parameter has been deprecated
ORA-32006: REMOTE_OS_AUTHENT initialization parameter has been deprecated
ORA-32006: USER_DUMP_DEST initialization parameter has been deprecated
ORACLE instance started.

Total System Global Area 267227136 bytes
Fixed Size 2158832 bytes
Variable Size 134221584 bytes
Database Buffers 125829120 bytes
Redo Buffers 5017600 bytes

First of all, the remote_os_authent parameter is deprecated in 11g, so you can simply drop it. In Oracle 11g you can replace the background_dump_dest, core_dump_dest and user_dump_dest parameters with the new diagnostic_dest parameter, which by default is derived from the value of the $ORACLE_BASE environment variable. Finally, you can set a memory usage target for the whole instance (PGA and SGA) with the new memory_target parameter, and a memory usage limit with the memory_max_target parameter.

In this example, all six parameters are set for this Oracle 10g instance:

SQL> select NAME, VALUE from v$parameter where name in ('pga_aggregate_target','sga_target',
'sga_max_size','user_dump_dest','background_dump_dest','core_dump_dest') order by NAME;

NAME VALUE
------------------------------ --------------------------------------------------
background_dump_dest /oracle_10g/app/oracle/admin/mydb/bdump
core_dump_dest /oracle_10g/app/oracle/admin/mydb/cdump
pga_aggregate_target 100663296
sga_max_size 268435456
sga_target 268435456
user_dump_dest /oracle_10g/app/oracle/admin/mydb/udump

6 rows selected.

In this 11g instance, only the diagnostic_dest and memory_target parameters were defined; the other parameters were set by Oracle, derived from those two:

SQL> select NAME, VALUE from v$parameter where name in ('pga_aggregate_target','sga_target',
'sga_max_size','memory_target','memory_max_target','user_dump_dest','background_dump_dest',
'diagnostic_dest','core_dump_dest') order by NAME;

NAME VALUE
------------------------------ ------------------------------------------------------------
background_dump_dest /oracle_11g/product/diag/rdbms/mydb/mydb/trace
core_dump_dest /oracle_11g/product/diag/rdbms/mydb/mydb/cdump
diagnostic_dest /oracle_11g/product
memory_max_target 369098752
memory_target 369098752
pga_aggregate_target 0
sga_max_size 369098752
sga_target 0
user_dump_dest /oracle_11g/product/diag/rdbms/mydb/mydb/trace

9 rows selected.
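In pfile terms, the migration from the 10g example to the 11g one boils down to something like this (a sketch; the paths and sizes are hypothetical and should match your own instance):

```
# Deprecated 10g parameters, removed:
#   *.background_dump_dest='/oracle_10g/app/oracle/admin/mydb/bdump'
#   *.core_dump_dest='/oracle_10g/app/oracle/admin/mydb/cdump'
#   *.user_dump_dest='/oracle_10g/app/oracle/admin/mydb/udump'
#   *.remote_os_authent=FALSE
#   *.sga_target=268435456
#   *.pga_aggregate_target=100663296
# 11g replacements:
*.diagnostic_dest='/oracle_11g/product'
*.memory_target=352M
*.memory_max_target=352M
```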

More information:

11g Automatic Diagnostic Repository (ADR)
REMOTE_OS_AUTHENT
SGA_MAX_SIZE
DIAGNOSTIC_DEST

Friday, June 3, 2011

Checking database character set

If you want to export an Oracle database in order to import the data into another database (imp/exp), or you want to create a new Oracle database similar to an existing one, then you have to pay attention to the database character set and the NLS_LANG environment variable. If you don't, in the case of an export/import you will end up with data different from the source database, and in the case of database creation your application might not store data as expected; both problems are hard to detect and difficult to solve. Therefore, spend a minute checking the NLS_CHARACTERSET of the source database and setting the NLS_LANG environment variable at import or database creation time, and you won't have to re-export or recreate a database because of character set issues.

To check the NLS_CHARACTERSET and all localization parameters of your source database, you can query the NLS_DATABASE_PARAMETERS view:

SQL> select * from NLS_DATABASE_PARAMETERS;

PARAMETER VALUE
------------------------------ ----------------------------------------
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CURRENCY $
NLS_ISO_CURRENCY AMERICA
NLS_NUMERIC_CHARACTERS .,
NLS_CHARACTERSET WE8ISO8859P1
NLS_CALENDAR GREGORIAN
NLS_DATE_FORMAT dd-mon-yyyy
NLS_DATE_LANGUAGE AMERICAN
NLS_SORT BINARY
NLS_TIME_FORMAT HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY $
NLS_COMP BINARY
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CONV_EXCP FALSE
NLS_NCHAR_CHARACTERSET AL16UTF16
NLS_RDBMS_VERSION 10.2.0.1.0

20 rows selected.

Once you have the NLS_LANGUAGE, NLS_TERRITORY and NLS_CHARACTERSET parameter values, you can set the NLS_LANG environment variable accordingly, so that the client settings match those of the source database:

oracle@myserver:~$ export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
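NLS_LANG is built as <language>_<territory>.<characterset>; a small shell sketch that assembles it from the three values queried above:

```shell
# Values taken from NLS_DATABASE_PARAMETERS on the source database
NLS_LANGUAGE=AMERICAN
NLS_TERRITORY=AMERICA
NLS_CHARACTERSET=WE8ISO8859P1

# NLS_LANG format: <language>_<territory>.<characterset>
NLS_LANG="${NLS_LANGUAGE}_${NLS_TERRITORY}.${NLS_CHARACTERSET}"
export NLS_LANG
echo "$NLS_LANG"    # AMERICAN_AMERICA.WE8ISO8859P1
```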

And if you're creating a database, don't forget to declare the character set properly:

CREATE DATABASE mydb
...
CHARACTER SET WE8ISO8859P1
...

More information:

Oracle Database Globalization Support Guide