Monday, May 30, 2011

Compressed exports

Using RMAN to back up an Oracle database is very convenient, but sometimes you have to export your database instead because RMAN is unsuitable: for example, when you need to copy or back up just one table, or when you don't have archived logging enabled due to storage constraints, need to back up the database anyway, and cannot shut down the instance while backing up. In those cases exp is a good option.

But for some mysterious reason exp (at least up to Oracle 10g) has no compression option and you cannot redirect its output to standard output, so backing up large databases can be difficult. Fortunately, you can use a named pipe to work around this; first create the named pipe that will connect the output of exp to the input of your compressor:

[oracle]$ mknod mypipe p
[oracle]$ ls -la
total 8
drwxrwx--- 2 oracle dba 4096 May 30 16:47 .
drwxrwx--- 7 oracle dba 4096 May 30 16:45 ..
prw-r----- 1 oracle dba 0 May 30 16:47 mypipe

Then launch your favorite compressor in the background and finally run the exp command; you will get an export compressed on the fly:

[oracle]$ compress < mypipe > mydb.dmp.Z &
[oracle]$ exp / parfile=myparfile.txt log=mylog.txt file=mypipe

And remember, if you want to put this in a script, add a wait statement after the exp command so the script doesn't finish before the compressor does.
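For reference, the whole procedure could be put together in a small script more or less like this (pipe, dump file and parameter file names are just placeholders):

#!/bin/sh
# Sketch of an on-the-fly compressed export using a named pipe.
PIPE=/tmp/exp_pipe_$$
mknod $PIPE p

# Start the compressor in the background, reading from the pipe.
compress < $PIPE > mydb.dmp.Z &

# exp writes its dump "file" into the pipe.
exp / parfile=myparfile.txt log=mylog.txt file=$PIPE

# Wait for the background compressor to finish before cleaning up.
wait
rm -f $PIPE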

Tuesday, May 24, 2011

PeopleSoft, Oracle and an unlucky DBA

If you're an Oracle DBA, chances are you have to deal with PeopleSoft databases. They're huge, complex databases that suddenly run out of temp space or take many times their usual processing time, driving the Oracle DBA crazy in the process.

In short, in order to support other database engines PeopleSoft uses ordinary tables as temporary tables and tags them in the PSRECDEFN table with rectype=7. You cannot just gather object statistics on them as usual, because these tables grow a lot and then empty very quickly. First of all, you should use the Appendix A script of the PeopleSoft Enterprise Performance on Oracle 10g Database Red Paper; reading the whole document is worth the time.
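If you want to see which records PeopleSoft treats as temporary in your own database, a query along these lines should list them (assuming the usual SYSADM schema; the physical tables are normally named PS_ plus the record name):

select recname, rectype
from sysadm.psrecdefn
where rectype = 7
order by recname;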

This is what I know first-hand. The problem is that sometimes this special statistics-gathering procedure is not enough and you get the problems mentioned above anyway; but a fellow Oracle DBA, after analyzing a lot of AWR reports, concluded that it also helps to gather statistics on these objects:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS (
    ownname          => 'SYSADM',
    tabname          => 'PS_GP_ITER_TRGR',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    degree           => DBMS_STATS.AUTO_DEGREE,
    cascade          => TRUE);

  DBMS_STATS.GATHER_TABLE_STATS (
    ownname          => 'SYSADM',
    tabname          => 'PS_GP_RUNCTL',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    degree           => DBMS_STATS.AUTO_DEGREE,
    cascade          => TRUE);

  DBMS_STATS.GATHER_TABLE_STATS (
    ownname          => 'SYSADM',
    tabname          => 'PS_GP_CAL_RUN',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    degree           => DBMS_STATS.AUTO_DEGREE,
    cascade          => TRUE);

  DBMS_STATS.GATHER_TABLE_STATS (
    ownname          => 'SYSADM',
    tabname          => 'PS_GP_PYE_PRC_STAT',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    degree           => DBMS_STATS.AUTO_DEGREE,
    cascade          => TRUE);

  DBMS_STATS.GATHER_TABLE_STATS (
    ownname          => 'SYSADM',
    tabname          => 'PS_GP_HST_WRK',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    degree           => DBMS_STATS.AUTO_DEGREE,
    cascade          => TRUE);
END;
/

analyze index SYSADM.PS_GP_PYE_SEG_STAT compute statistics;
analyze index SYSADM.PSAGP_PYE_PRC_STAT compute statistics;
analyze index SYSADM.PS_GP_PYE_PRC_STAT compute statistics;

I don't know for sure why this procedure solves some performance and temp-space issues, but it works for me, and the Oracle DBA who wrote it is very good, so it might work for you as well.

We run this procedure as a daily job on one PeopleSoft database and in general it works fine, but we still get problems with that database from time to time. This leads me to think the procedure should be run right before each big or complex PeopleSoft job, because right after running it there is no problem at all; in other words, if you have two or more complex PeopleSoft tasks in one day, it's better to run this procedure before each one.
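As a rough sketch of how such a daily job could be set up (the procedure name SYSADM.GATHER_GP_STATS is hypothetical and would simply wrap the calls above), DBMS_SCHEDULER can run it every morning before the batch window:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'SYSADM.GP_STATS_JOB',      -- hypothetical job name
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'SYSADM.GATHER_GP_STATS',   -- wraps the calls above
    repeat_interval => 'FREQ=DAILY;BYHOUR=5',
    enabled         => TRUE,
    comments        => 'Gather stats on GP working tables before batch runs');
END;
/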

Monday, May 23, 2011

Checking Data Block Corruption

If you think a table might have corrupted blocks, you can check it with the DBMS_REPAIR package. The checking procedure is very simple: first create a repair table (in your schema) if you don't have one, then check the table with the DBMS_REPAIR.CHECK_OBJECT procedure, and finally look in the repair table for block corruption records. Optionally, you can drop the repair table if you won't need it anymore.


myserver> sqlplus '/ as sysdba'

SQL*Plus: Release 10.2.0.4.0 - Production on Fri May 20 08:29:27 2011

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> BEGIN
DBMS_REPAIR.ADMIN_TABLES (
TABLE_NAME => 'REPAIR_TABLE',
TABLE_TYPE => dbms_repair.repair_table,
ACTION => dbms_repair.create_action,
TABLESPACE => 'USERS');
END;
/

PL/SQL procedure successfully completed.

SQL> exit;

myserver> nohup sqlplus '/ as sysdba' @check.sql &
[1] 12263522
myserver> Sending nohup output to nohup.out.

myserver> tail -f nohup.out

SQL*Plus: Release 10.2.0.4.0 - Production on Fri May 20 08:29:27 2011

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

number corrupt: 0

PL/SQL procedure successfully completed.

SQL> exit;

myserver> cat check.sql
SET SERVEROUTPUT ON
DECLARE num_corrupt INT;
BEGIN
num_corrupt := 0;
DBMS_REPAIR.CHECK_OBJECT (
SCHEMA_NAME => 'MYSCHEMA',
OBJECT_NAME => 'MYTABLE',
REPAIR_TABLE_NAME => 'REPAIR_TABLE',
CORRUPT_COUNT => num_corrupt);
DBMS_OUTPUT.PUT_LINE('number corrupt: ' || TO_CHAR (num_corrupt));
END;
/
exit;

As you can see, according to the procedure's message there are no corrupted blocks (number corrupt: 0), and if you created the repair table just before running the DBMS_REPAIR.CHECK_OBJECT procedure, it will be empty as well. You might have noticed I decided to put the call in an SQL script (check.sql) and run it with nohup; if you have a very large table and a not-so-reliable connection to the database server, it's a good idea to run the procedure that way.

myserver> sqlplus '/ as sysdba'

SQL*Plus: Release 10.2.0.4.0 - Production on Fri May 20 08:29:27 2011

Copyright (c) 1982, 2007, Oracle. All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select * from REPAIR_TABLE;

no rows selected

SQL> BEGIN
DBMS_REPAIR.ADMIN_TABLES (
TABLE_NAME => 'REPAIR_TABLE',
TABLE_TYPE => dbms_repair.repair_table,
ACTION => dbms_repair.drop_action);
END;
/

PL/SQL procedure successfully completed.

SQL> exit;

And if you ran out of luck and got a table with corrupted blocks, be sure to check the Oracle documentation and understand what repairing a table with block corruption really means.
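Just as a rough idea of the next steps (after reading the documentation and taking a backup): DBMS_REPAIR.FIX_CORRUPT_BLOCKS marks the blocks listed in the repair table as software corrupt, and DBMS_REPAIR.SKIP_CORRUPT_BLOCKS lets queries and DML skip them; bear in mind that the rows in the skipped blocks are lost to your applications. A minimal sketch, with my example names:

SET SERVEROUTPUT ON
DECLARE
  num_fixed INT := 0;
BEGIN
  -- Mark the blocks recorded in REPAIR_TABLE as software corrupt.
  DBMS_REPAIR.FIX_CORRUPT_BLOCKS (
    SCHEMA_NAME       => 'MYSCHEMA',
    OBJECT_NAME       => 'MYTABLE',
    REPAIR_TABLE_NAME => 'REPAIR_TABLE',
    FIX_COUNT         => num_fixed);
  DBMS_OUTPUT.PUT_LINE('blocks fixed: ' || TO_CHAR(num_fixed));

  -- Allow subsequent queries and DML on the table to skip those blocks.
  DBMS_REPAIR.SKIP_CORRUPT_BLOCKS (
    SCHEMA_NAME => 'MYSCHEMA',
    OBJECT_NAME => 'MYTABLE',
    FLAGS       => DBMS_REPAIR.SKIP_FLAG);
END;
/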

Wednesday, May 11, 2011

Oracle index usage

If you want basic usage statistics for the indexes recently used in your Oracle instance, you can use this SQL statement:

SQL> select p.object_owner, p.object_name, sum(t.disk_reads_total) as disk_reads_sum,
sum(t.rows_processed_total) as rows_processed_sum from dba_hist_sql_plan p, dba_hist_sqlstat t
where p.sql_id = t.sql_id and p.object_type like '%INDEX%' and p.object_owner not in
('SYS','SYSTEM','SYSMAN','DBSNMP','OUTLN','TSMSYS')
group by p.object_owner, p.object_name order by p.object_owner, 4 desc;

OBJECT_OWNER OBJECT_NAME DISK_READS_SUM ROWS_PROCESSED_SUM
-------------------- ------------------------------- -------------- ------------------
MYSCHEMA01 IDX_MYTABLE01 620641 6209
MYSCHEMA01 IDX_MYTABLE02 569879 4965
MYSCHEMA01 IDX_MYTABLE03_PK 20 4793
MYSCHEMA01 IDX_MYTABLE04_PK 20 4793
MYSCHEMA02 IDX_MYTABLE05 414609 19940082
MYSCHEMA02 IDX_MYTABLE06 559721 3776187
MYSCHEMA02 IDX_MYTABLE07 1165298 621298
MYSCHEMA02 IDX_MYTABLE08_PK 795 107678
MYSCHEMA02 IDX_MYTABLE09_PK 174079 78627

Tuesday, May 10, 2011

Oracle outstanding alerts

Since Oracle 10g you can configure alerts for database events such as tablespace usage (and a lot more), and by default the instance issues alerts about tablespace size. You can therefore check all the alerts, and also database block corruption, with these statements:

SQL> column REASON format A35
SQL> column SUGGESTED_ACTION format A35
SQL> column TIME_SUGGESTED format A35

SQL> select REASON, SUGGESTED_ACTION, TIME_SUGGESTED from dba_alert_history union
select REASON, SUGGESTED_ACTION, TIME_SUGGESTED from dba_outstanding_alerts
order by TIME_SUGGESTED;

REASON SUGGESTED_ACTION TIME_SUGGESTED
----------------------------------- ----------------------------------- -----------------------------------
Tablespace [UNDO] is [76 percent] f Add space to the tablespace 07-MAY-11 01.47.40.716220 AM -05:00
ull

Tablespace [UNDO] is [73 percent] f Add space to the tablespace 07-MAY-11 03.07.46.785262 AM -05:00
ull

SQL> SELECT * FROM V$DATABASE_BLOCK_CORRUPTION;

no rows selected

Needless to say, if you have data in the V$DATABASE_BLOCK_CORRUPTION view you might have serious problems with your database.
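By the way, the default tablespace thresholds can be changed per tablespace with DBMS_SERVER_ALERT.SET_THRESHOLD; something along these lines should set warning and critical levels of 80 and 90 percent for a given tablespace (MYTBS is just an example name):

BEGIN
  DBMS_SERVER_ALERT.SET_THRESHOLD (
    metrics_id              => DBMS_SERVER_ALERT.TABLESPACE_PCT_FULL,
    warning_operator        => DBMS_SERVER_ALERT.OPERATOR_GE,
    warning_value           => '80',
    critical_operator       => DBMS_SERVER_ALERT.OPERATOR_GE,
    critical_value          => '90',
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => NULL,
    object_type             => DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,
    object_name             => 'MYTBS');
END;
/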

Monday, May 9, 2011

Oracle instances dying and read-only filesystems

You have a Linux server running one or more Oracle instances, and one day all the instances are down except their listeners. You check the alert logs and find absolutely nothing, and if you're lucky enough you try to start up the instances and get an error message about a read-only filesystem.

myserver> dbstart
ORACLE_HOME_LISTNER is not SET, unable to auto-start Oracle Net Listener
Usage: /oracle/product/10.2.0/bin/dbstart ORACLE_HOME
touch: cannot touch `/oracle/product/10.2.0/startup.log': Read-only file system
chmod: changing permissions of `/oracle/product/10.2.0/startup.log': Read-only file system
Processing Database instance "mydb01": log file /oracle/product/10.2.0/startup.log
/oracle/product/10.2.0/bin/dbstart: line 361: /oracle/product/10.2.0/startup.log: Read-only file system
touch: cannot touch `/oracle/product/10.2.0/startup.log': Read-only file system
chmod: changing permissions of `/oracle/product/10.2.0/startup.log': Read-only file system
Processing Database instance "mydb02": log file /oracle/product/10.2.0/startup.log
/oracle/product/10.2.0/bin/dbstart: line 361: /oracle/product/10.2.0/startup.log: Read-only file system

myserver> touch /oracle/product/10.2.0/test.txt
touch: cannot touch `/oracle/product/10.2.0/test.txt': Read-only file system

You try to create a file with touch on the Oracle filesystem and you can't, so you check the server's uptime, load and mounted filesystems:

myserver> uptime
11:34:30 up 138 days, 18:02, 1 user, load average: 0.00, 0.00, 0.00

myserver> mount
/dev/mapper/VolGroup01-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/cciss/c0d0p1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
/dev/mapper/appsvg-lv001 on /oracle type ext3 (rw,_netdev)

Everything seems fine, but you notice the _netdev option on the Oracle filesystem. When you configure filesystems in /etc/fstab and some of them are network-dependent, you add this option so the operating system won't try to mount them before it has network connectivity.
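In /etc/fstab such an entry would look something like this (device and mount point taken from the mount output above):

/dev/mapper/appsvg-lv001  /oracle  ext3  defaults,_netdev  0 0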

Therefore, you have a filesystem that is accessed over the network, so it might be iSCSI:

myserver> /sbin/lsmod|grep iscsi
iscsi_tcp 19785 3
libiscsi_tcp 21957 2 iscsi_tcp,cxgb3i
libiscsi2 42181 5 ib_iser,iscsi_tcp,bnx2i,cxgb3i,libiscsi_tcp
scsi_transport_iscsi2 37709 7 ib_iser,iscsi_tcp,bnx2i,cxgb3i,libiscsi2
scsi_transport_iscsi 6085 1 scsi_transport_iscsi2
scsi_mod 141717 23 mptctl,ib_iser,iscsi_tcp,bnx2i,cxgb3i,libiscsi2,scsi_transport_iscsi2,scsi_dh,sg,qla2xxx,scsi_transport_fc,mptspi,scsi_transport_spi,mptsas,mptscsih,scsi_transport_sas,usb_storage,cciss,hpahcisr,sd_mod

myserver> /sbin/iscsiadm -m session -P 2
iscsiadm: Maybe you are not root?
iscsiadm: Could not lock discovery DB: /var/lock/iscsi/lock.write: Permission denied
Target: iqn.2000-03.com.someprovider:mycompany:87:mycmp1
Current Portal: 10.0.57.23:3260,1
Persistent Portal: 10.0.57.34:3260,1
**********
Interface:
**********
Iface Name: default
Iface Transport: tcp
Iface Initiatorname: iqn.1994-05.com.redhat:1234abc56d78
Iface IPaddress: 10.0.57.85
Iface HWaddress:
Iface Netdev:
SID: 1
iSCSI Connection State: Unknown
iSCSI Session State: LOGGED_IN
Internal iscsid Session State: Unknown
************************
Negotiated iSCSI params:
************************
HeaderDigest: None
DataDigest: None
MaxRecvDataSegmentLength: 262144
MaxXmitDataSegmentLength: 262144
FirstBurstLength: 262144
MaxBurstLength: 1048576
ImmediateData: Yes
InitialR2T: No
MaxOutstandingR2T: 1

That's the problem! The Oracle instances die because they cannot write to their database files, and since the logs live on the same filesystem, Oracle cannot write error messages either. The connectivity problem could be caused by heavy load, network glitches, problems with the disk appliance or something else, but if the problem is not too bad your sysadmin might change some iSCSI parameters to help a bit:

node.conn[0].timeo.noop_out_interval = 0
node.conn[0].timeo.noop_out_timeout = 0
node.session.timeo.replacement_timeout = 86400
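These settings normally live in /etc/iscsi/iscsid.conf (for sessions created from then on) and can also be applied to the already discovered node records with iscsiadm; a sketch of what your sysadmin might run, using the target name from the session output above:

# Update the existing node record (run as root); the change takes effect
# the next time the session logs in.
iscsiadm -m node -T iqn.2000-03.com.someprovider:mycompany:87:mycmp1 \
         -o update -n node.session.timeo.replacement_timeout -v 86400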

More information:

Filesystems becoming read-only on iSCSI
Linux* Open-iSCSI

Friday, May 6, 2011

Checking Dataguard instances

If you have Oracle databases configured with Dataguard, you may want to be sure everything is going fine and archived logs are being transmitted to, and applied on, your standby database. You can check this easily by executing this script on your primary instance:

COLUMN ARCHIVE_NAME FORMAT A35
COLUMN TRANSMITTED FORMAT A18
COLUMN STDBY_APPLIED FORMAT A10
COLUMN PRI_ARCH FORMAT A10
COLUMN STDBY_DEST FORMAT A10

select case when remote.sequence# is null
then 'NOT TRANSMITTED!'
else 'transmitted'
end as TRANSMITTED,
remote.applied as STDBY_APPLIED,
local.sequence# PRI_SEQ#,
local.archived PRI_ARCH,
remote.standby_dest STDBY_DEST,
remote.sequence# STDBY_SEQ#,
current_seq#, status_db
from (
(select *
from (
select v$instance.status as status_db,instance_role,
sequence# current_seq#, (sequence#-11) secuencia
from v$log, v$instance
where v$log.status = 'CURRENT'
), v$archived_log l
where dest_id = 1
and l.sequence# > secuencia) local
left join
(select * from v$archived_log where dest_id = 2) remote
on local.sequence# = remote.sequence# and
local.thread# = remote.thread#
)
order by local.sequence# desc;

The meaning of the fields is:

TRANSMITTED whether the archived log was transferred to the standby instance
STDBY_APPLIED whether the archived log was applied on the standby database
PRI_SEQ# the primary's log sequence number
PRI_ARCH whether the log was archived on the primary
STDBY_SEQ# the standby's log sequence number
CURRENT_SEQ# the current sequence number on the primary
STATUS_DB the status of the primary instance

As you can see in this example, the standby database is up to date and everything is fine:

TRANSMITTED STDBY_APPL PRI_SEQ# PRI STDBY_DEST STDBY_SEQ# CURRENT_SEQ# STATUS_DB
------------------ ---------- ---------- --- ---------- ---------- ------------ ------------
transmitted YES 788 YES YES 788 789 OPEN
transmitted YES 787 YES YES 787 789 OPEN
transmitted YES 786 YES YES 786 789 OPEN
transmitted YES 785 YES YES 785 789 OPEN
transmitted YES 784 YES YES 784 789 OPEN
transmitted YES 783 YES YES 783 789 OPEN
transmitted YES 782 YES YES 782 789 OPEN
transmitted YES 781 YES YES 781 789 OPEN
transmitted YES 780 YES YES 780 789 OPEN
transmitted YES 779 YES YES 779 789 OPEN

10 rows selected.

In this example, all the archive logs have been transferred to the standby database but not applied, in this case because the standby database is open in read-only mode:

TRANSMITTED STDBY_APPL PRI_SEQ# PRI STDBY_DEST STDBY_SEQ# CURRENT_SEQ# STATUS_DB
------------------ ---------- ---------- --- ---------- ---------- ------------ ------------
transmitted NO 387 YES YES 387 388 OPEN
transmitted NO 386 YES YES 386 388 OPEN
transmitted NO 385 YES YES 385 388 OPEN
transmitted NO 384 YES YES 384 388 OPEN
transmitted NO 383 YES YES 383 388 OPEN
transmitted NO 382 YES YES 382 388 OPEN
transmitted NO 381 YES YES 381 388 OPEN
transmitted NO 380 YES YES 380 388 OPEN
transmitted NO 379 YES YES 379 388 OPEN
transmitted NO 378 YES YES 378 388 OPEN

10 rows selected.

This script works as long as your LOG_ARCHIVE_DEST_2 parameter points to the standby database.
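You can verify that quickly on the primary; something like this should show destination 2 pointing to the standby and in VALID state:

select dest_id, status, target, destination
from v$archive_dest
where dest_id = 2;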

And just for the record, not everything in this blog is my idea; this script, for example, came from a fellow DBA. Sometimes I get ideas from the Internet, sometimes from my coworkers, and believe it or not, sometimes I have my own ideas!

Thursday, May 5, 2011

expdp backup failing and master table

You are trying to do a backup of your Oracle database with Data Pump, and expdp ends almost instantly with this error message:

ORA-31626: job does not exist
ORA-31633: unable to create master table "MYUSER.EXPDP_MYDB_BCK"
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT", line 871
ORA-00955: name is already used by an existing object

If you didn't write the script that runs Data Pump and have little experience with expdp, you might not know what this error means, but it's pretty simple: the master table (in this case MYUSER.EXPDP_MYDB_BCK), or another object with that name, already exists, so Data Pump cannot create the master table it needs in order to do the backup. It may exist because expdp didn't finish properly the last time it was run, so the master table was never dropped.

The solution is pretty simple too: if you don't need that table, just drop it; if the object is needed and is not related to this Data Pump backup, change the job name (and thus the master table name) in your Data Pump parameters.
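A quick way to confirm the table is an orphan from a previous run is to check DBA_DATAPUMP_JOBS before dropping it; a minimal sketch with the names from the error above:

select owner_name, job_name, state
from dba_datapump_jobs
where owner_name = 'MYUSER';

-- If the job shows up as NOT RUNNING (or doesn't show up at all) and you
-- don't need to restart it, dropping the leftover master table is enough:
drop table MYUSER.EXPDP_MYDB_BCK;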

More information:

Oracle Data Pump in Oracle Database 10g

Wednesday, May 4, 2011

Dataguard and dying archive processes

Some time ago you successfully configured a pair of instances with Dataguard and since then everything worked fine, until you opened your standby instance in read-only mode; then the primary's archive processes started dying and you started getting these messages in the primary's alert log:


Thu Mar 17 15:32:49 2011
******************************************************************
LGWR: Setting 'active' archival for destination LOG_ARCHIVE_DEST_2
******************************************************************
Errors in file /oracle_11g/product/diag/rdbms/mydb/mydb/trace/mydb_lns1_17449.trc (incident=356411):
ORA-00600: internal error code, arguments: [17113], [0x000000000], [], [], [], [], [], [], [], [], [], []
Thu Mar 17 15:32:53 2011
Sweep Incident[356411]: completed
Errors in file /oracle_11g/product/diag/rdbms/mydb/mydb/trace/mydb_lns1_17449.trc (incident=356412):
ORA-00600: internal error code, arguments: [], [], [], [], [], [], [], [], [], [], [], []
Errors in file /oracle_11g/product/diag/rdbms/mydb/mydb/trace/mydb_lns1_17449.trc:
ORA-00600: internal error code, arguments: [], [], [], [], [], [], [], [], [], [], [], []
Errors in file /oracle_11g/product/diag/rdbms/mydb/mydb/trace/mydb_lns1_17449.trc:
ORA-00600: internal error code, arguments: [], [], [], [], [], [], [], [], [], [], [], []
Errors in file /oracle_11g/product/diag/rdbms/mydb/mydb/trace/mydb_lns1_17449.trc:
ORA-00600: internal error code, arguments: [], [], [], [], [], [], [], [], [], [], [], []
Errors in file /oracle_11g/product/diag/rdbms/mydb/mydb/trace/mydb_lns1_17449.trc:
ORA-00600: internal error code, arguments: [], [], [], [], [], [], [], [], [], [], [], []
Thu Mar 17 15:32:57 2011
Errors in file /oracle_11g/product/diag/rdbms/mydb/mydb/trace/mydb_arc0_17142.trc (incident=361859):
ORA-00600: internal error code, arguments: [17113], [0x000000000], [], [], [], [], [], [], [], [], [], []
Errors in file /oracle_11g/product/diag/rdbms/mydb/mydb/trace/mydb_arc0_17142.trc (incident=361860):
ORA-00600: internal error code, arguments: [], [], [], [], [], [], [], [], [], [], [], []
Errors in file /oracle_11g/product/diag/rdbms/mydb/mydb/trace/mydb_arc0_17142.trc:
ORA-00600: internal error code, arguments: [], [], [], [], [], [], [], [], [], [], [], []
Errors in file /oracle_11g/product/diag/rdbms/mydb/mydb/trace/mydb_arc0_17142.trc:
ORA-00600: internal error code, arguments: [], [], [], [], [], [], [], [], [], [], [], []

As you might know, an ORA-00600 error means something like "I have no idea what happened, so I'll throw an ORA-00600 error". This kind of situation is very hard to diagnose, but after some hard work (by my DBA coworker) we realized it was caused by a lack of permissions on the /var/tmp directory, where Oracle puts the sockets for the listener.

The directories and permissions of /var/tmp should be something like this:

myserver> ls -la /var/tmp
total 12
drwxrwxrwt 3 root root 4096 2011-03-18 10:10 .
drwxr-xr-x 16 root root 4096 2010-02-05 13:14 ..
drwxrwxrwt 2 root dba 4096 2011-03-18 08:13 .oracle
myserver> ls -la /var/tmp/.oracle/
total 8
drwxrwxrwt. 2 root dba 4096 2011-05-04 08:17 .
drwxrwxrwt. 3 root root 4096 2011-05-04 09:18 ..
srwxrwxrwx. 1 oracle dba 0 2010-12-17 11:47 s#6115.1
srwxrwxrwx. 1 oracle dba 0 2010-12-17 11:47 s#6115.2
srwxrwxrwx. 1 oracle dba 0 2011-01-12 16:48 s#7018.1
srwxrwxrwx. 1 oracle dba 0 2011-01-12 16:48 s#7018.2
srwxrwxrwx. 1 oracle dba 0 2010-05-12 12:31 s#7662.1
srwxrwxrwx. 1 oracle dba 0 2010-05-12 12:31 s#7662.2
srwxrwxrwx. 1 oracle dba 0 2010-05-11 15:58 sEXTPROC_FOR_XE
srwxrwxrwx 1 oracle dba 0 2011-05-04 08:17 smyserverDBG_CSSD
srwxrwxrwx 1 oracle dba 0 2011-05-04 08:17 sOCSSD_LL_myserver_localhost
srwxrwxrwx 1 oracle dba 0 2011-05-04 08:17 sOracle_CSS_LclLstnr_localhost_0

Therefore, after fixing the permissions on /var/tmp, restarting the primary's listener and re-enabling archive log shipping, the problem disappeared.
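In case you hit the same thing, the fix is basically restoring the sticky, world-writable permissions shown above and then bouncing the listener and the archive destination; a rough sketch:

# As root: restore the permissions shown in the listing above
chmod 1777 /var/tmp /var/tmp/.oracle
chgrp dba /var/tmp/.oracle

# As oracle: restart the listener so it recreates its sockets
lsnrctl stop
lsnrctl start

# In the primary instance: re-enable log shipping to the standby
# SQL> alter system set log_archive_dest_state_2 = 'ENABLE';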

Tuesday, May 3, 2011

Getting AIX basic system info

Sometimes you might need to know basic information about your AIX server, such as how long it has been up (uptime), the operating system version (oslevel), the firmware version (lsmcode), operating system parameters (lsattr), hardware (lscfg) or even storage errors (errpt), so I put this transcript here as a reference. Some parameters are easy to understand and others are just for experienced sysadmins, but this information can be helpful even for an Oracle DBA working on an AIX server.

myserver> uptime
04:34PM up 79 days, 5:52, 1 user, load average: 1.20, 1.13, 1.11

myserver> oslevel -r
5300-11

myserver> lsmcode -c
The current permanent system firmware image is SF240_320
The current temporary system firmware image is SF240_332
The system is currently booted from the temporary firmware image.

myserver> lsattr -E -l sys0
SW_dist_intr false Enable SW distribution of interrupts True
autorestart true Automatically REBOOT system after a crash True
boottype disk N/A False
capacity_inc 1.00 Processor capacity increment False
capped true Partition is capped False
conslogin enable System Console Login False
cpuguard enable CPU Guard True
dedicated true Partition is dedicated False
ent_capacity 2.00 Entitled processor capacity False
frequency 1056000000 System Bus Frequency False
fullcore false Enable full CORE dump True
fwversion IBM,SF240_320 Firmware version and revision levels False
id_to_partition 0X80000819C3F00009 Partition ID False
id_to_system 0X80000819C3F00000 System ID False
iostat false Continuously maintain DISK I/O history True
keylock normal State of system keylock at boot time False
log_pg_dealloc true Log predictive memory page deallocation events True
max_capacity 3.00 Maximum potential processor capacity False
max_logname 9 Maximum login name length at boot time True
maxbuf 20 Maximum number of pages in block I/O BUFFER CACHE True
maxmbuf 0 Maximum Kbytes of real memory allowed for MBUFS True
maxpout 0 HIGH water mark for pending write I/Os per file True
maxuproc 1024 Maximum number of PROCESSES allowed per user True
min_capacity 2.00 Minimum potential processor capacity False
minpout 0 LOW water mark for pending write I/Os per file True
modelname IBM,9119-590 Machine name False
ncargs 512 ARG/ENV list size in 4K byte blocks True
nfs4_acl_compat secure NFS4 ACL Compatibility Mode True
pre430core false Use pre-430 style CORE dump True
pre520tune disable Pre-520 tuning compatibility mode True
realmem 18874368 Amount of usable physical memory in Kbytes False
rtasversion 1 Open Firmware RTAS version False
sed_config select Stack Execution Disable (SED) Mode True
systemid IBM,02025224F Hardware system identifier False
variable_weight 0 Variable processor capacity weight False

myserver> lscfg
INSTALLED RESOURCE LIST

The following resources are installed on the machine.
+/- = Added or deleted from Resource List.
* = Diagnostic support not available.

Model Architecture: chrp
Model Implementation: Multiple Processor, PCI bus

+ sys0 System Object
+ sysplanar0 System Planar
* pci45 U5791.001.9920F70-P1 PCI Bus
* pci46 U5791.001.9920F70-P1 PCI Bus
+ fcs2 U5791.001.9920F70-P1-C08-T1 FC Adapter
* fcnet2 U5791.001.9920F70-P1-C08-T1 Fibre Channel Network Protocol Device
+ fscsi2 U5791.001.9920F70-P1-C08-T1 FC SCSI I/O Controller Protocol Device
* hdisk5 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-LF0000000000000 EMC Symmetrix FCP Raid1
* hdiskpower0 U5791.001.9920F70-P1-C08-T1-L40 PowerPath Device
* hdiskpower1 U5791.001.9920F70-P1-C08-T1-L41 PowerPath Device
* hdiskpower2 U5791.001.9920F70-P1-C08-T1-L42 PowerPath Device
* hdiskpower3 U5791.001.9920F70-P1-C08-T1-L43 PowerPath Device
* hdiskpower4 U5791.001.9920F70-P1-C08-T1-L44 PowerPath Device
* hdiskpower5 U5791.001.9920F70-P1-C08-T1-L45 PowerPath Device
* hdiskpower6 U5791.001.9920F70-P1-C08-T1-L46 PowerPath Device
* hdiskpower7 U5791.001.9920F70-P1-C08-T1-L47 PowerPath Device
* hdiskpower8 U5791.001.9920F70-P1-C08-T1-L48 PowerPath Device
* hdiskpower9 U5791.001.9920F70-P1-C08-T1-L49 PowerPath Device
* hdiskpower10 U5791.001.9920F70-P1-C08-T1-L50 PowerPath Device
* hdiskpower11 U5791.001.9920F70-P1-C08-T1-L51 PowerPath Device
* hdiskpower12 U5791.001.9920F70-P1-C08-T1-L52 PowerPath Device
* hdiskpower13 U5791.001.9920F70-P1-C08-T1-L53 PowerPath Device
* hdiskpower14 U5791.001.9920F70-P1-C08-T1-L54 PowerPath Device
* hdiskpower15 U5791.001.9920F70-P1-C08-T1-L55 PowerPath Device
* hdiskpower16 U5791.001.9920F70-P1-C08-T1-L56 PowerPath Device
* hdiskpower17 U5791.001.9920F70-P1-C08-T1-L57 PowerPath Device
* hdiskpower18 U5791.001.9920F70-P1-C08-T1-L58 PowerPath Device
* hdiskpower19 U5791.001.9920F70-P1-C08-T1-L60 PowerPath Device
* hdiskpower20 U5791.001.9920F70-P1-C08-T1-L61 PowerPath Device
* hdiskpower21 U5791.001.9920F70-P1-C08-T1-L62 PowerPath Device
* hdiskpower22 U5791.001.9920F70-P1-C08-T1-L63 PowerPath Device
* hdiskpower23 U5791.001.9920F70-P1-C08-T1-L64 PowerPath Device
* hdiskpower24 U5791.001.9920F70-P1-C08-T1-L65 PowerPath Device
* hdiskpower25 U5791.001.9920F70-P1-C08-T1-L66 PowerPath Device
* hdiskpower26 U5791.001.9920F70-P1-C08-T1-L59 PowerPath Device
* hdisk67 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L11000000000000 EMC Symmetrix FCP Raid5
* hdisk68 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L12000000000000 EMC Symmetrix FCP Raid5
* hdisk69 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L13000000000000 EMC Symmetrix FCP Raid5
* hdisk70 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L14000000000000 EMC Symmetrix FCP Raid5
* hdisk71 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L15000000000000 EMC Symmetrix FCP Raid5
* hdisk72 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L16000000000000 EMC Symmetrix FCP Raid5
* hdisk73 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L17000000000000 EMC Symmetrix FCP Raid5
* hdisk74 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L18000000000000 EMC Symmetrix FCP Raid5
* hdisk75 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L10A000000000000 EMC Symmetrix FCP Raid5
* hdisk78 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-LF5000000000000 EMC Symmetrix FCP Raid5
* hdisk79 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-LF6000000000000 EMC Symmetrix FCP Raid5
* hdisk107 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L2000000000000 EMC Symmetrix FCP Raid5
* hdisk108 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L3000000000000 EMC Symmetrix FCP Raid5
* hdisk109 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L4000000000000 EMC Symmetrix FCP Raid5
* hdisk110 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L5000000000000 EMC Symmetrix FCP Raid5
* hdisk111 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L6000000000000 EMC Symmetrix FCP Raid5
* hdisk112 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L7000000000000 EMC Symmetrix FCP Raid5
* hdisk113 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L8000000000000 EMC Symmetrix FCP Raid5
* hdisk114 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L9000000000000 EMC Symmetrix FCP Raid5
* hdisk115 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-LA000000000000 EMC Symmetrix FCP Raid5
* hdisk116 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-LB000000000000 EMC Symmetrix FCP Raid5
* hdisk117 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-LC000000000000 EMC Symmetrix FCP Raid5
* hdisk118 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-LD000000000000 EMC Symmetrix FCP Raid5
* hdisk119 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-LE000000000000 EMC Symmetrix FCP Raid5
* hdisk120 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-LF000000000000 EMC Symmetrix FCP Raid5
* hdisk121 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L10000000000000 EMC Symmetrix FCP Raid5
* hdisk122 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L100000000000000 EMC Symmetrix FCP Raid5
* hdisk123 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L101000000000000 EMC Symmetrix FCP Raid5
* hdisk124 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L102000000000000 EMC Symmetrix FCP Raid5
* hdisk125 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L103000000000000 EMC Symmetrix FCP Raid5
* hdisk126 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L104000000000000 EMC Symmetrix FCP Raid5
* hdisk127 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L105000000000000 EMC Symmetrix FCP Raid5
* hdisk128 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L106000000000000 EMC Symmetrix FCP Raid5
* hdisk129 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L107000000000000 EMC Symmetrix FCP Raid5
* hdisk130 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L109000000000000 EMC Symmetrix FCP Raid5
* hdisk131 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L110000000000000 EMC Symmetrix FCP Raid5
* hdisk132 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L111000000000000 EMC Symmetrix FCP Raid5
* hdisk133 U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L112000000000000 EMC Symmetrix FCP Raid5
* hdiskpower27 U5791.001.9920F70-P1-C08-T1-L29 PowerPath Device
* hdiskpower28 U5791.001.9920F70-P1-C08-T1-L30 PowerPath Device
* hdiskpower29 U5791.001.9920F70-P1-C08-T1-L31 PowerPath Device
* hdiskpower30 U5791.001.9920F70-P1-C08-T1-L32 PowerPath Device
* hdiskpower31 U5791.001.9920F70-P1-C08-T1-L33 PowerPath Device
* hdiskpower32 U5791.001.9920F70-P1-C08-T1-L34 PowerPath Device
* hdiskpower33 U5791.001.9920F70-P1-C08-T1-L35 PowerPath Device
* hdiskpower34 U5791.001.9920F70-P1-C08-T1-L36 PowerPath Device
* hdiskpower35 U5791.001.9920F70-P1-C08-T1-L37 PowerPath Device
* hdiskpower36 U5791.001.9920F70-P1-C08-T1-L38 PowerPath Device
* hdiskpower37 U5791.001.9920F70-P1-C08-T1-L39 PowerPath Device
* pci15 U5791.001.9920F6T-P2 PCI Bus
* pci25 U5791.001.9920F6T-P2 PCI Bus
+ scsi12 U5791.001.9920F6T-P2-T6 Wide/Ultra-3 SCSI I/O Controller
+ hdisk18 U5791.001.9920F6T-P2-T6-L8-L0 16 Bit LVD SCSI Disk Drive (73400 MB)
+ hdisk19 U5791.001.9920F6T-P2-T6-L9-L0 16 Bit LVD SCSI Disk Drive (73400 MB)
+ ses10 U5791.001.9920F6T-P2-T6-L15-L0 SCSI Enclosure Services Device
* pci4 U5791.001.9920F6H-P1 PCI Bus
* pci5 U5791.001.9920F6H-P1 PCI Bus
+ ent0 U5791.001.9920F6H-P1-C09-T1 2-Port 10/100/1000 Base-TX PCI-X Adapter (14108902)
+ ent1 U5791.001.9920F6H-P1-C09-T2 2-Port 10/100/1000 Base-TX PCI-X Adapter (14108902)
* vio0 Virtual I/O Bus
* ent3 U9119.590.025224F-V9-C2-T1 Virtual I/O Ethernet Adapter (l-lan)
* vsa0 U9119.590.025224F-V9-C0 LPAR Virtual Serial Adapter
* vty0 U9119.590.025224F-V9-C0-L0 Asynchronous Terminal
* pci0 U5791.001.9920F6W-P1 PCI Bus
* pci44 U5791.001.9920F6W-P1 PCI Bus
+ fcs0 U5791.001.9920F6W-P1-C09-T1 FC Adapter
* fcnet0 U5791.001.9920F6W-P1-C09-T1 Fibre Channel Network Protocol Device
+ fscsi0 U5791.001.9920F6W-P1-C09-T1 FC SCSI I/O Controller Protocol Device
* hdisk2 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-LF0000000000000 EMC Symmetrix FCP Raid1
* hdisk58 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L11000000000000 EMC Symmetrix FCP Raid5
* hdisk59 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L12000000000000 EMC Symmetrix FCP Raid5
* hdisk60 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L13000000000000 EMC Symmetrix FCP Raid5
* hdisk61 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L14000000000000 EMC Symmetrix FCP Raid5
* hdisk62 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L15000000000000 EMC Symmetrix FCP Raid5
* hdisk63 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L16000000000000 EMC Symmetrix FCP Raid5
* hdisk64 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L17000000000000 EMC Symmetrix FCP Raid5
* hdisk65 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L18000000000000 EMC Symmetrix FCP Raid5
* hdisk66 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L10A000000000000 EMC Symmetrix FCP Raid5
* hdisk76 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-LF5000000000000 EMC Symmetrix FCP Raid5
* hdisk77 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-LF6000000000000 EMC Symmetrix FCP Raid5
* hdisk80 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L2000000000000 EMC Symmetrix FCP Raid5
* hdisk81 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L3000000000000 EMC Symmetrix FCP Raid5
* hdisk82 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L4000000000000 EMC Symmetrix FCP Raid5
* hdisk83 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L5000000000000 EMC Symmetrix FCP Raid5
* hdisk84 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L6000000000000 EMC Symmetrix FCP Raid5
* hdisk85 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L7000000000000 EMC Symmetrix FCP Raid5
* hdisk86 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L8000000000000 EMC Symmetrix FCP Raid5
* hdisk87 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L9000000000000 EMC Symmetrix FCP Raid5
* hdisk88 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-LA000000000000 EMC Symmetrix FCP Raid5
* hdisk89 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-LB000000000000 EMC Symmetrix FCP Raid5
* hdisk90 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-LC000000000000 EMC Symmetrix FCP Raid5
* hdisk91 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-LD000000000000 EMC Symmetrix FCP Raid5
* hdisk92 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-LE000000000000 EMC Symmetrix FCP Raid5
* hdisk93 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-LF000000000000 EMC Symmetrix FCP Raid5
* hdisk94 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L10000000000000 EMC Symmetrix FCP Raid5
* hdisk95 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L100000000000000 EMC Symmetrix FCP Raid5
* hdisk96 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L101000000000000 EMC Symmetrix FCP Raid5
* hdisk97 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L102000000000000 EMC Symmetrix FCP Raid5
* hdisk98 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L103000000000000 EMC Symmetrix FCP Raid5
* hdisk99 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L105000000000000 EMC Symmetrix FCP Raid5
* hdisk100 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L106000000000000 EMC Symmetrix FCP Raid5
* hdisk101 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L107000000000000 EMC Symmetrix FCP Raid5
* hdisk102 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L109000000000000 EMC Symmetrix FCP Raid5
* hdisk103 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L110000000000000 EMC Symmetrix FCP Raid5
* hdisk104 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L111000000000000 EMC Symmetrix FCP Raid5
* hdisk105 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L112000000000000 EMC Symmetrix FCP Raid5
* hdisk106 U5791.001.9920F6W-P1-C09-T1-W5006048ACC36E4EC-L113000000000000 EMC Symmetrix FCP Raid5
+ fcs1 U5791.001.9920F6W-P1-C09-T2 FC Adapter
* fcnet1 U5791.001.9920F6W-P1-C09-T2 Fibre Channel Network Protocol Device
+ fscsi1 U5791.001.9920F6W-P1-C09-T2 FC SCSI I/O Controller Protocol Device
* rmt1 U5791.001.9920F6W-P1-C09-T2-W100000E00223B4FE-L1000000000000 Other FC SCSI Tape Drive
* rmt2 U5791.001.9920F6W-P1-C09-T2-W100000E00223B4FE-L2000000000000 Other FC SCSI Tape Drive
* rmt3 U5791.001.9920F6W-P1-C09-T2-W100000E00223D755-L0 Other FC SCSI Tape Drive
* rmt4 U5791.001.9920F6W-P1-C09-T2-W100000E00223D755-L1000000000000 Other FC SCSI Tape Drive
* rmt5 U5791.001.9920F6W-P1-C09-T2-W100000E00223FF7A-L0 Other FC SCSI Tape Drive
* rmt6 U5791.001.9920F6W-P1-C09-T2-W100000E002240263-L0 Other FC SCSI Tape Drive
* rmt0 U5791.001.9920F6W-P1-C09-T2-W100000E00223FF7A-L1000000000000 Other FC SCSI Tape Drive
* rmt7 U5791.001.9920F6W-P1-C09-T2-W100000E002240263-L1000000000000 Other FC SCSI Tape Drive
+ L2cache0 L2 Cache
+ mem0 Memory
+ proc0 Processor
+ proc2 Processor

myserver> errpt -a|more
---------------------------------------------------------------------------
LABEL: SC_DISK_ERR2
IDENTIFIER: B6267342

Date/Time: Mon Dec 13 06:53:34 CST 2010
Sequence Number: 000007
Machine Id: 001122334400
Node Id: myserver
Class: H
Type: PERM
Resource Name: hdisk111
Resource Class: disk
Resource Type: SYMM_RAID5
Location: U5791.001.9920F70-P1-C08-T1-W5006048ACC36E4E3-L6000000000000
VPD:
Manufacturer................EMC
Machine Type and Model......SYMMETRIX
ROS Level and ID............5671
Serial Number...............12345678
Part Number.................000000000000510023010187
EC Level....................751315
LIC Node VPD................0AA3
Device Specific.(Z0)........04
Device Specific.(Z1)........51
Device Specific.(Z2)........567100750000000000092508
Device Specific.(Z3)........12000000
Device Specific.(Z4)........54130008
Device Specific.(Z5)........BF80
Device Specific.(Z6)........4D

Description
DISK OPERATION ERROR

Probable Causes
DASD DEVICE

Failure Causes
DISK DRIVE
DISK DRIVE ELECTRONICS

Recommended Actions
PERFORM PROBLEM DETERMINATION PROCEDURES

Detail Data
PATH ID
0
SENSE DATA
0600 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0118 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0083 0000
0000 003D 0019
---------------------------------------------------------------------------
LABEL: SC_DISK_ERR2
IDENTIFIER: B6267342

Date/Time: Mon Dec 13 06:53:34 CST 2010
Sequence Number: 000006
Machine Id: 001122334400