Monday, August 1, 2011

'du' and 'df' tools report different space utilization

Recently on my quarter rack Exadata system (2 nodes, an 11.2.0.2 database on top of OEL 5.5), /u01 had 0 free disk space. After I identified and removed a 65 GB trace file, "df -h" still showed 0 free disk space, yet "du -sh /u01" showed only 24 GB in use on a filesystem with a capacity of 89 GB. The discrepancy arises because df reads the filesystem's allocated-block counts, while du sums the sizes of the files it can see in the directory tree; a deleted file that some process still holds open occupies blocks that df counts but du cannot see. After some research, I found this note: 'du' and 'df' tools report different space utilization [ID 457444.1]. By following the instructions in this note, I fixed the discrepancy. Here is how:
1 Run the lsof command (/usr/sbin/lsof | grep deleted) as root to identify the processes holding deleted files open:

[root@node02 scripts]# lsof | grep deleted
oracle    10373 oracle    4w      REG              253,2    10538879   13041668 /u01/app/11.2.0/grid/log/thordb02/agent/ohasd/oraagent_oracle/oraagent_oracle.l10 (deleted)
oracle    10375 oracle    4w      REG              253,2    10538879   13041668 /u01/app/11.2.0/grid/log/thordb02/agent/crsd/oraagent_oracle/oraagent_oracle.l10 (deleted)
oracle    11028 oracle    4w      REG              253,2    10532777    3817484 /u01/app/11.2.0/grid/log/thordb02/agent/crsd/oraagent_oracle/oraagent_oracle.l10 (deleted)
oracle    11028 oracle   38w      REG              253,2 69055365120    7344857 /u01/app/oracle/diag/rdbms/thor/thor2/trace/thor2_smon_11028.trc (deleted)

Note:

  • The 7th column in the output gives the size of the deleted file, in bytes (a one-liner that totals these sizes follows the list).
  • The 9th column gives the name of the file that remains held open.
  • The 1st and 2nd columns give the process name and PID that still hold the open file descriptor.
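
For a quick total of how much space deleted-but-open files are pinning, you can sum that 7th column. This is just a convenience sketch built on the same lsof output; note it will double-count a file that several processes hold open:

[root@node02 scripts]# /usr/sbin/lsof | grep deleted | awk '{sum += $7} END {print sum " bytes held by deleted files"}'
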
2 Kill the process (PID 11028) identified above, which is holding the large deleted trace file open:
# kill -9 11028
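
Note that PID 11028 is the thor2 SMON background process (the trace file name, thor2_smon_11028.trc, gives it away), so "kill -9" brings that instance down; with Clusterware running it should be restarted automatically, but plan for the bounce. If killing the process is not acceptable, a standard Linux alternative (not from the note, just a sketch) is to truncate the deleted file in place through the process's /proc entry, using the PID and the file descriptor number from the FD column ("38w" means descriptor 38):

[root@node02 scripts]# : > /proc/11028/fd/38

This shrinks the deleted trace file to zero bytes, so the space is returned without killing anything.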

3 Rerun "df -h" and "du -sh /u01" to check that they now report consistent space utilization:
[root@node2 scripts]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1
                       30G  5.2G   23G  19% /
/dev/sda1             124M   16M  102M  14% /boot
/dev/mapper/VGExaDb-LVDbOra1
                       99G   30G   65G  32% /u01
tmpfs                  81G  196M   81G   1% /dev/shm
[root@node2 scripts]# du -sh /u01
29G     /u01
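
As a final check, the lsof command from step 1 should no longer list the 65 GB trace file:

[root@node2 scripts]# /usr/sbin/lsof | grep deleted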

Problem solved.
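
For the curious, the underlying behavior is easy to reproduce. On Linux, "rm" only removes a file's name from the directory; the blocks stay allocated until the last open file descriptor on the file is closed. "df" reads the filesystem's block counters, which still include the file, while "du" walks the directory tree, which no longer shows it. A minimal sketch (the file name and location are made up for illustration):

[root@node2 ~]# dd if=/dev/zero of=/tmp/bigfile bs=1M count=1024    (create a 1 GB file)
[root@node2 ~]# tail -f /tmp/bigfile &                              (hold an open descriptor on it)
[root@node2 ~]# rm /tmp/bigfile                                     (the name is gone, the blocks are not)
[root@node2 ~]# df -h /tmp                                          (still counts the 1 GB as used)
[root@node2 ~]# du -sh /tmp                                         (no longer sees it)
[root@node2 ~]# kill %1                                             (close the descriptor; df catches up)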