Linux df Command Usage Examples

This post on the Linux df command opens a series of articles for Linux newbies in which you’ll find descriptions and usage examples of major Linux commands like df, top, fsck, mount and so on.

Introduction

The Linux df command can be used to display disk usage statistics for the file systems present on a Linux system. It’s a handy tool for finding out how much disk space each file system is consuming. Also, if a particular filename is supplied as an argument to df, it displays the disk usage statistics for the file system on which that file resides. System administrators can use this command to check the disk usage status of the various file systems on a Linux machine so that proper clean-up and maintenance can be performed. The df command provides various options through which the output can be customized in the way that is most suited to the user.

In this article, we will discuss the df command through practical examples.

Syntax

Before jumping to the examples, let’s first take a look at how to use the df command. Here is the syntax information of the df command from the man page:

df [OPTION]... [FILE]...

So we see that the df command does not require any mandatory argument; both OPTION and FILE are optional. The OPTION argument tells df to act in the way specified by that option, while the FILE argument tells df to print the disk usage of only the file system on which FILE resides.

NOTE: for those who are new to this type of syntax information, any argument specified in square brackets [] is optional.

Examples

1. Basic example

Here is how the df command can be used in its most basic form.

# df 
Filesystem     1K-blocks    Used     Available Use% Mounted on 
/dev/sda6       29640780 4320704     23814388  16%     / 
udev             1536756       4     1536752    1%     /dev 
tmpfs             617620     888     616732     1%     /run 
none                5120       0     5120       0%     /run/lock 
none             1544044     156     1543888    1%     /run/shm

In the output above, the disk usage statistics of all the file systems were displayed when the df command was run without any argument.

The first column specifies the file system name, and the second column specifies the total size of a particular file system in units of 1K-blocks, where 1K is 1024 bytes. The Used and Available columns specify the amount of space that is in use and free respectively. The Use% column specifies the used space as a percentage, while the final column ‘Mounted on’ specifies the mount point of the file system.

2. Get the disk usage of file system through a file

As already discussed in the introduction, df can display the disk usage information of a file system if any file residing on that file system is supplied as an argument to it.

Here is an example:

# df test 
Filesystem     1K-blocks    Used      Available Use% Mounted on 
/dev/sda6       29640780    4320600   23814492  16%       /

Here is another example:

# df groff.txt 
Filesystem     1K-blocks    Used     Available Use% Mounted on 
/dev/sda6       29640780    4320600  23814492  16%     /

We used two different files (residing on the same file system) as arguments to the df command. The output confirms that df displays the disk usage of the file system on which a file resides.

3. Display inode information

There exists an option -i through which the output of the df command displays the inode information instead of block usage.

For example:

# df -i
Filesystem      Inodes    IUsed    IFree     IUse% Mounted on
/dev/sda6      1884160    261964   1622196   14%        /
udev           212748     560      212188    1%         /dev
tmpfs          216392     477      215915    1%         /run
none           216392     3        216389    1%         /run/lock
none           216392     8        216384    1%         /run/shm

As we can see in the output above, the inode related information was displayed for each filesystem.

4. Produce a grand total

There exists an option --total which adds a row at the end of the output with a grand total for every column.

Here is an example:

# df --total 
Filesystem     1K-blocks    Used    Available Use% Mounted on 
/dev/sda6       29640780 4320720    23814372  16%     / 
udev             1536756       4    1536752   1%      /dev 
tmpfs             617620     892    616728    1%      /run 
none                5120       0    5120      0%      /run/lock 
none             1544044     156    1543888   1%      /run/shm 
total           33344320 4321772    27516860  14%

So we see that the output contains an extra row at the end displaying the total for each column.

5. Produce output in human readable format

There exists an option -h through which the output of the df command can be produced in a human-readable format.

Here is an example:

# df -h 
Filesystem      Size  Used   Avail Use% Mounted on 
/dev/sda6       29G   4.2G   23G   16%     /  
udev            1.5G  4.0K   1.5G   1%     /dev 
tmpfs           604M  892K   603M   1%     /run 
none            5.0M     0   5.0M   0%     /run/lock 
none            1.5G  156K   1.5G   1%     /run/shm

So we can see that the output displays the figures in the form of ‘G’ (gigabytes), ‘M’ (megabytes) and ‘K’ (kilobytes). This makes the output easy to read and comprehend, and thus human readable. Note that the name of the second column also changes to ‘Size’ in this format.
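These options can also be combined. For example, -T adds a column with the file system type, and it works together with -h and --total (the mount points below are just examples):

# df -hT
# df -h --total /home /var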


Track file changes using auditd

Most Linux distributions come with the Linux Auditing System, which makes it possible to track file changes, file accesses as well as system calls. It’s pretty useful functionality for sysadmins who wish to know who accessed or changed sensitive files like /etc/passwd, /etc/sudoers or others, and when.

The auditd daemon, which usually runs in the background and is started at boot by default, logs those events into the /var/log/audit/audit.log file (or into another file if a different log file or syslog facility is specified). The common usage is to list all files which should be watched and to search auditd’s logs from time to time. For example, I prefer to track any file changes to /etc/passwd, reading/writing of /etc/sudoers, execution of /bin/some/binary, or just everything (read, write, attribute changes, execution) for my /very/important/file.

In order to configure that you’ll need two commands: auditctl and ausearch. The first one is for configuring the auditd daemon (e.g. setting a watch on a file), the second one is for searching auditd’s logs (it’s possible to grep /var/log/audit/audit.log too, but the ausearch command makes this task easier).

Install and start Linux Auditing System

If it happens that the auditd daemon isn’t installed on your system then you can fix this with one of the commands below:

sudo apt-get install auditd

or

sudo yum install audit

The next step is to make sure that auditd is running; if the command ps ax | grep [a]udit shows nothing then start auditd with:

/etc/init.d/auditd start
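You can also ask the audit subsystem itself whether it is enabled; auditctl -s prints the current status, including the pid of the running daemon:

auditctl -s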

As soon as the auditd daemon is started we can start configuring it to track file changes using the auditctl command.

Make auditd log file changes

auditctl -w /etc/passwd -k passwd-ra -p ra

This command adds a rule for the auditd daemon to monitor the file /etc/passwd (see option -w /etc/passwd) for reading or changing of its attributes (see option -p ra, where r is for read and a is for attribute). The command also specifies a filter key (-k passwd-ra) that will uniquely identify these records in the auditd log files.
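Following the same pattern, here are watches matching the other examples mentioned in the introduction (the paths and filter keys are just placeholders):

auditctl -w /etc/sudoers -k sudoers-rw -p rw
auditctl -w /bin/some/binary -k binary-x -p x
auditctl -w /very/important/file -k important-all -p rwxa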

Now let’s test the /etc/passwd rule: output the last lines of the file and then search the audit log for the corresponding records

tail /etc/passwd

and then

[root@test artemn]# ausearch -k passwd-ra
----
time->Wed Jul  4 15:17:14 2012
type=CONFIG_CHANGE msg=audit(1341407834.821:207310): auid=500 ses=23783 op="add rule" key="passwd-ra" list=4 res=1
----
time->Wed Jul  4 15:17:20 2012
type=PATH msg=audit(1341407840.181:207311): item=0 name="/etc/passwd" inode=31982841 dev=09:02 mode=0100644 ouid=0 ogid=0 rdev=00:00
type=CWD msg=audit(1341407840.181:207311):  cwd="/home/artemn"
type=SYSCALL msg=audit(1341407840.181:207311): arch=c000003e syscall=2 success=yes exit=3 a0=7fffecd41817 a1=0 a2=0 a3=7fffecd40b40 items=1 ppid=642502 pid=521288 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=23783 comm="tail" exe="/usr/bin/tail" key="passwd-ra"

As you can see, the output of the second command shows that auditd has records for the filter key ‘passwd-ra’; it shows that the root user (uid=0 gid=0) read the file /etc/passwd using the tail command (comm="tail" exe="/usr/bin/tail") on July 4, 2012 (time->Wed Jul 4 15:17:20 2012).

The ausearch utility is pretty powerful, so I recommend reading the output of man ausearch; in the meantime here are some useful examples:

ausearch -x /bin/grep
ausearch -x rm

This approach allows you to scan auditd records for a certain executable; e.g. if you’d like to see whether any of the watched files was deleted using the rm command then you should use the second of the two commands above.

This one will show you all records for a certain UID (user ID):

ausearch -ui 1000
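A couple of closing notes on auditctl itself: rules added on the command line are lost after a reboot unless you also put them into the audit rules file (/etc/audit/audit.rules or a file under /etc/audit/rules.d/, depending on the distribution), and you can list or remove the loaded rules like this:

auditctl -l                                  # list currently loaded rules
auditctl -W /etc/passwd -k passwd-ra -p ra   # remove the watch added above
auditctl -D                                  # delete all loaded rules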

Limit CPU usage of Linux process

cpulimit is a small program written in C that allows you to limit the CPU usage of a Linux process. The limit is specified as a percentage, so it’s possible to prevent high CPU load generated by scripts, programs or processes.

I found cpulimit pretty useful for scripts running from cron; for example, I can do overnight backups and be sure that compressing a 50GB file with gzip won’t eat all CPU resources and all other system processes will have enough CPU time (a sample crontab entry is shown at the end of this post).

In most Linux distributions cpulimit is available from the binary repositories, so you can install it using one of these commands:

sudo apt-get install cpulimit

or

sudo yum install cpulimit

If that’s not possible in your distro then it’s extremely easy to compile it:

cd /usr/src/
wget --no-check-certificate https://github.com/opsengine/cpulimit/tarball/master -O cpulimit.tar
tar -xvf cpulimit.tar
cd opsengine-cpulimit-9df7758
make
ln -s "$(pwd)/cpulimit" /usr/sbin/cpulimit

From that moment on you can run commands limited to a given CPU percentage; e.g. the command below runs a gzip compression so that the gzip process never steps over the 10% CPU limit:

/usr/sbin/cpulimit --limit=10 /bin/gzip vzdump-openvz-102-2012_06_26-19_01_11.tar

You can check the actual CPU usage of gzip using one of these commands:

ps axu | grep [g]zip

or

top

Btw, the first command uses ‘grep [g]zip’ to avoid the last line you would normally get in the output below (i.e. the grep process matching itself):

root    896448  10.0  3.1 159524  3528 ?        S    13:12   0:00 /usr/sbin/cpulimit --limit=10 /bin/gzip vzdump-openvz-102-2012_06_26-19_01_11.tar
root       26490  0.0  0.0   6364   708 pts/0    S+   15:24   0:00 grep gzip

Using cpulimit you can also apply a CPU limit to already running processes; e.g. the command below sets a 20% CPU limit for the process with PID 2342:

/usr/sbin/cpulimit -p 2342 -l 20

It’s also possible to specify the process by its executable file instead of its PID:

/usr/sbin/cpulimit -P /usr/sbin/nginx -l 30
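Coming back to the cron use case mentioned earlier, a crontab entry could look like this (the schedule and the file path are purely illustrative):

0 2 * * * /usr/sbin/cpulimit --limit=10 /bin/gzip /backup/vzdump-openvz-102.tar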

Geolocation for Nagios

Some time ago I came across the NagMap addon for Nagios and found it pretty helpful for monitoring multiple hosts around the world.

For example, if there are production servers in Europe, the US, India and New Zealand, it’s much better to see their states on a map rather than in the boring Nagios host status list. Every host can have one of the following states based on ping statistics: green, yellow or red. Green/white (ok) status corresponds to 0-10% packet loss, yellow (warning) to 10-20% packet loss, and red (critical) means the host is down or packet loss to it is more than 20%. All three states are shown on the map using different markers.

Using the NagMap addon for Nagios it’s possible to create a map of the hosts and their states on top of Google Maps.

On my map all hosts are normally in the OK state (the desired picture); when some host goes down or becomes sluggish you’ll see red markers instead (the exact marker depends on the type of the host).

Setup and configure NagMap

So first of all you need to download the nagmap tarball from the project’s download section and unpack it somewhere on the server that hosts your Nagios monitoring system. The downloaded tarball contains PHP scripts which read Nagios’s status file and show the corresponding markers on the map using Google Maps. I suggest creating a new subdirectory in the directory where the Nagios files are located:

cd /usr/share/nagios/
wget http://labs.shmu.org.uk/nagmap/nagmap-0.11.tar.gz
tar -xvzf nagmap-0.11.tar.gz
rm nagmap-0.11.tar.gz

Once the archive is unpacked it’s necessary to set the path to the Nagios status file in nagmap’s status.php. In my case Nagios’s status.dat file is located at /var/nagios/status.dat, so I have the following line in nagmap’s status.php:

$fp = fopen("/var/nagios/status.dat","r");

Naturally, the web server must have sufficient rights to read the /var/nagios/status.dat file.
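A quick way to verify this is to try reading the file as the web server user (here assumed to be apache; on Debian-based systems it is usually www-data):

sudo -u apache head -n 1 /var/nagios/status.dat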

The next step is to set the geographical location of the hosts which should be shown in NagMap. It should be specified in the following way:

define host {
        use generic-host
        host_name HostName1
        address 11.22.33.44
        notes latlng: 40.664167, -73.938611
        check_command check-host-alive
        register 1
}

Where "40.664167, -73.938611" is the latitude and longitude of the host (New York City in this example). So you should add ‘notes latlng:’ lines to all hosts in Nagios that you want to see on the map.

From this point you should be able to open the map, e.g. at the https://your.server.com/nagios/nagmap/ URL. If the page is empty then there is some problem reading or parsing the status.dat file. Unfortunately nagmap doesn’t provide a debug feature, so you should open marker.php (e.g. https://your.server.com/nagios/nagmap/marker.php) and look at its output to see where the problem is. Most probably you’ll need some basic PHP knowledge. Btw, the file marker.php contains the paths to the marker images, so you can easily change them from the defaults there.

How to get 10046 Trace for Oracle Export and Import Utility

PURPOSE
-------

If you want to find out what happens when you run the Oracle export or import utility, you can turn on Oracle trace event 10046. This will generate a trace file that can be used to find out what actually happens behind the scenes.

This is how you can get a 10046 trace for the Export and Import utilities.

1] Run the Oracle export command exp and let the program prompt you for the options.

$ exp

Enter the username and password as below when prompted:
Username: system
Password:

2] Open another window to the database server and login using sqlplus.

$ sqlplus system/manager

3] Now find out the SID of exp session

SQL> select sid, program from
v$session where username = 'SYSTEM';

       SID PROGRAM
---------- ------------------------------------------------
       788 exp@SERVER01 (TNS V1-V3)

4] Now find the PID and SPID for that session

SQL> select s.sid, p.pid, p.spid
from v$session s, v$process p
where s.paddr = p.addr and s.sid = 788;

       SID        PID SPID
---------- ---------- ---------
       788        189 1076

The SPID from the previous query is the operating system process ID (OSPID). This is the process that will be traced.

5] Now exit from this session

SQL> exit

6] Generate a trace file for process ID 1076. To do that, log in as sys using sqlplus and run the following commands:

$ sqlplus / as sysdba

SQL> oradebug setospid 1076
Oracle pid: 189, Unix process pid: 1076, image: oracle@SERVER01 (TNS V1-V3)

SQL> oradebug unlimit
Statement processed.

SQL> oradebug tracefile_name
/u01/app/oracle/diag/rdbms/dev/DEV1/trace/DEV1_ora_1076.trc

This gives the name of the trace file

SQL> oradebug Event 10046 trace name context forever, level 12;
Statement processed.

7] From the window where “exp” command was run, now export a table

8] From the SQL prompt of the window where you are logged in as the "sys" user, set the trace off once you have the required information or the error:

SQL> oradebug Event 10046 trace name context off;
ORA-00072: process "Unix process pid: 17370, image: oracle@SERVER01" is not active

SQL>exit

Now you have got the trace file which is
/u01/app/oracle/diag/rdbms/dev/DEV1/trace/DEV1_ora_1076.trc
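To get a human-readable summary of that raw trace you can run it through tkprof; a typical invocation (the output file name is arbitrary) looks like this:

$ tkprof /u01/app/oracle/diag/rdbms/dev/DEV1/trace/DEV1_ora_1076.trc /tmp/exp_10046.txt sys=no sort=exeela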

Investigating Slow Oracle database performance

On many occasions you will face a scenario where Oracle database performance is very slow. There are many possible reasons why database performance can be slow. To investigate a slow performance problem, begin by deciding what diagnostics should be gathered. To do this, consider the following questions and take the appropriate action.

Database is Slow: Is the performance problem constant or does it occur at certain times of the day ?

* CONSTANT
o Gather an AWR or Statspack report for a period of time when the problem occurs (a 1 hour report is usually sufficient).
o If you have an historic report which covers the same time of day and period when the performance was OK then take that too.

* ONLY CERTAIN TIMES
o Gather an AWR or Statspack report for a period of time which covers the period when the problem exists (For instance, if you have a problem when something is run between 12 and 3 then make sure the report covers either that time or part of that time).
o Additionally, for comparison, gather an AWR or Statspack report for a similar period of time when the problem does not occur. Always ensure that you are making a fair comparison – for instance, the same time of day or the same workload and make sure the duration of the report is the same.

NOTE 1: Where possible, Statspack reports should cover a minimum of 10 minutes and a maximum of 30 minutes. Longer periods can distort the information, and such reports should be re-gathered using a shorter time period. With AWR a 1 hour report is OK, but for most performance issues a short 10-30 minute snapshot should be sufficient.

NOTE 2: It is often prudent to read the matching ADDM report first, since it gives a pointer to the main issues. Reading the corresponding ADDM report as a first step to tuning can save a lot of time because it points straight at the main issues, as compared to trying to understand what an AWR report is presenting.

NOTE 3: If SQL performance is suspected as the cause of the slowness then collect an ASH report for the same period. If a specific SQL statement is suspected of being slow then run an ASH report just for that SQL ID and also look at using SQLTXplain to diagnose issues with that statement.

Database is Slow: Does the problem affect one session, several sessions or all sessions ?

* ONE SESSION – Gather a 10046 trace for the session (a sketch of the relevant commands follows this list).
* SEVERAL SESSIONS – Gather 10046 trace for one or two of the problem sessions
* ALL SESSIONS – Gather AWR or Statspack reports
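For reference, here is one way to gather these diagnostics from SQL*Plus, assuming the problem session is SID 123 with serial# 45678 (both are placeholders):

-- 10046 trace for a single session via DBMS_MONITOR
EXEC DBMS_MONITOR.session_trace_enable(session_id => 123, serial_num => 45678, waits => TRUE, binds => TRUE);
-- reproduce the slow workload, then switch tracing off:
EXEC DBMS_MONITOR.session_trace_disable(session_id => 123, serial_num => 45678);

-- AWR report covering the problem period (prompts for the snapshot range and report name)
@?/rdbms/admin/awrrpt.sql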

Database Hangs: Does a particular session "appear" to hang, or do several sessions or all sessions hang?

Please collect the following diagnostics according to the specific scenario:

When only one session appears to be ‘hung’

* Gather 10046 trace for the session.
* Get a few errorstacks for the session
* Gather an AWR (or Statspack) report for a period of time when the problem occurs (a 1 hour report is usually sufficient).

When more than one session appears to be ‘hung’

* Gather 10046 trace for one or two of the problem sessions
* Get a few errorstacks for one or two of the problem sessions
* Gather an AWR (or Statspack) report for a period of time when the problem occurs (a 1 hour report is usually sufficient).

When most of the sessions appear to be ‘hung’, treat this as a database hang.

Oracle PL/SQL performance tuning using BULK COLLECT and FORALL

Here I’ll show you how to do performance tuning of your PL/SQL code in an Oracle database using BULK COLLECT and FORALL. By using BULK COLLECT and FORALL instead of a normal FOR LOOP I have achieved significant performance benefits. In some cases I was able to bring the execution time down from 14 minutes to just 10 seconds.

You may already know that in an Oracle database, PL/SQL code is executed by the PL/SQL engine and SQL is executed by the SQL engine. SQL is embedded within PL/SQL, and when the PL/SQL engine encounters SQL code it passes control to the SQL engine for execution. This is called context switching between SQL and PL/SQL.

For example see the code below:

FOR i in 1..1000 LOOP
insert into emp values (…);
END LOOP;

In the above code, when the PL/SQL engine executes the FOR loop it needs to execute the INSERT statement 1000 times, which also means 1000 context switches. This generally degrades performance and causes longer execution times. Instead of context switching 1000 times, the same operation can be achieved with just a single context switch by using the BULK COLLECT and FORALL method. That is where you can achieve a significant performance benefit.

The test case here copies 1,000,000 records from a simple source table to a destination table. The source and destination tables have the same structure; the only difference is that the source table has a primary key.

Here is the code that I used to create the source table called t_source and the destination table called t_dest:

create table t_source(
empno number,
ename varchar2(10),
joindate date);

alter table t_source add constraint pk_src primary key (empno);

create table t_dest as select * from t_source;

Then I created 1,000,000 records in t_source using this simple code

declare
begin
for i in 1..1000000 loop
insert into t_source values (i, 'emp'||i, sysdate);
end loop;
commit;
end;
/

After inserting the source data, I analyzed the table so that the optimizer has up-to-date statistics about the source table. That helps Oracle as much as possible to find the best way to get the data from the t_source table. This may not be necessary for a simple table such as this, but depending on the number of rows and columns it can help enormously.

Anyway here is the code to gather table stats.

EXEC DBMS_STATS.gather_table_stats('JOE', 'T_SOURCE');

Then let’s set the SQL*Plus environment so that we can see SQL execution times. Run this from the SQL prompt:

SET TIMING ON;

Now I am going to populate t_dest using the usual FOR LOOP, which is a very common method used by programmers. The code opens a cursor over the source data, then reads each record from the cursor and inserts it into the t_dest table.

declare
cursor c1 is
select * from t_source;
begin
for src_rec in c1 loop
insert into t_dest values (src_rec.empno, src_rec.ename, src_rec.joindate);
end loop;
commit;
end;
/

Once the execution is complete you will get output something like this:

PL/SQL procedure successfully completed.

Elapsed: 00:02:40.12
SQL>

Depending on the speed of your machine and your database parameters, your code may take more or less time than shown here. We will compare this timing against the code that uses BULK COLLECT and FORALL.

Now run this code from your SQL*PLUS session

truncate table t_dest;

declare
cursor c1 is
select * from t_source;
TYPE src_tab IS TABLE OF t_source%ROWTYPE INDEX BY BINARY_INTEGER;
rec_tab src_tab;

begin
open c1;
fetch c1 BULK COLLECT INTO rec_tab limit 10000;
WHILE rec_tab.COUNT > 0 LOOP
FORALL i IN 1..rec_tab.COUNT
INSERT INTO t_dest (empno, ename, joindate) VALUES (rec_tab(i).empno,rec_tab(i).ename,rec_tab(i).joindate);
fetch c1 BULK COLLECT INTO rec_tab limit 10000;
END LOOP;
CLOSE c1;
commit;
end;
/

The output should be something like this:

PL/SQL procedure successfully completed.

Elapsed: 00:00:16.13
SQL>

As you can see, the time to insert 1,000,000 records from source to destination has decreased from 00:02:40.12 to just 00:00:16.13. This is a huge performance gain, and when dealing with millions of records the benefit can be tremendous.

Using oracle hanganalyze tool of oradebug utility to analyze oracle hang

Sometimes you may find that your SQL session is hanging, or that some application that accesses your database is hanging. An application that accesses an Oracle database basically creates a session in the database, and it is that session that is hanging, which to a normal user looks like the application itself is hanging.

Whether it is a SQL session/statement or an application that is hanging, it can sometimes be difficult to find out what is causing the database to hang.

You may use the usual tools such as AWR (Automatic Workload Repository) and ADDM (Automatic Database Diagnostic Monitor) to find out what is causing the database to hang. But many times the information you get from these tools is not enough to determine the cause or to identify which session is responsible.

Moreover, if your database instance is hanging then you cannot run AWR and ADDM at all.

In such cases you can use a tool called hanganalyze. This tool is provided by Oracle and is part of the oradebug utility. It is very handy for finding out exactly which session is causing the hang situation.

Here are the steps showing how to use the Oracle hanganalyze tool:

Preparation:
Login as sysdba and run the following:

sqlplus / as sysdba
create user joe identified by joe;
grant create session to joe;
grant resource to joe;

Simulate a session hang (in this case row locking):
Login as user joe from two sessions. Let’s refer to them as session 1 and session 2.

From session 1 do the following:

create table dept (
deptno number,
dname varchar2(20),
location varchar2(20));

insert into dept values (10, 'Finance','London');
insert into dept values (20, 'HR','London');

commit;

SQL> select * from dept;

    DEPTNO DNAME                LOCATION
---------- -------------------- --------------------
        10 Finance              London
        20 HR                   London

update dept
set location='New York'
where deptno = 10;

Now from session 2 try to update the same data by executing the code below:

update dept
set location='Singapore'
where deptno = 10;

You will notice that the session is hanging. This is because the first update in session 1 has updated the same row and the transaction has not yet been committed. So, to provide data consistency, session 1 has locked the row, and session 2 now has to wait until session 1 commits or rolls back.
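Before turning to hanganalyze, the blocker can often be spotted straight from the data dictionary; the query below is a quick cross-check (it assumes an Oracle version where V$SESSION has the BLOCKING_SESSION column, i.e. 10g or later):

select sid, serial#, blocking_session, event
from v$session
where blocking_session is not null;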

Now run hanganalyze to find out exactly which session is blocking whom.
Login as the sys user and find out the SPID of the hung session.

sqlplus / as sysdba

select a.sid, a.serial#, b.spid ospid, to_char(logon_time,'dd-Mon-rr hh24:mi') Logintime
from gv$session a, gv$process b
where a.inst_id = b.inst_id and a.paddr = b.addr and status = 'ACTIVE';

Note down the ospid of the hung session. Then run the following:

SQL> oradebug setospid [spid]
SQL> oradebug unlimit
SQL> oradebug hanganalyze 3

This will generate a trace file which contains the details of why the Oracle database session is hanging. Even though the trace file contains loads of information, it is still quite easy to find the real culprit.

Open the trace file and find the section marked "Chains most likely to have caused the hang:". There you will see Chain 1 and Chain 2, which are basically session operations.
Going further down to where these chains are defined, you will see information such as the session, serial#, blocking session, current SQL and so on.

Some extract from my trace file is below:

Chains most likely to have caused the hang:
[a] Chain 1 Signature: 'SQL*Net message from client'
Chain 1 Signature Hash: 0x38c48850
[b] Chain 2 Signature: 'LNS ASYNC end of log'
Chain 2 Signature Hash: 0x8ceed34f

and the section that provides the chain detail:

Chain 1:
-------------------------------------------------------------------------------
Oracle session identified by:
{
instance: 1 (dtcnmh.dtcnmh)
os id: 2653
process id: 36, oracle@SERV01A (TNS V1-V3)
session id: 770
session serial #: 11739
}
is waiting for 'enq: TX - row lock contention' with wait info:
{
p1: 'name|mode'=0x54580006
p2: 'usn
p3: 'sequence'=0x507
time in wait: 59.699862 sec
timeout after: never
wait id: 26
blocking: 0 sessions
current sql: update dept
set location='Singapore'

I hope this helps.

Performing Oracle RMAN Cold backup

A cold backup is done when the database is not open. When performing a cold backup using RMAN the database needs to be in MOUNT mode. This is because RMAN needs the data file details, which are available while the database is in MOUNT mode.

A cold backup is also called a consistent backup. This is because before bringing the database into MOUNT mode, the database is first shut down with the IMMEDIATE or TRANSACTIONAL option. This means that the checkpoint SCNs in the data file headers are all synchronised.

Below are the steps to do RMAN cold backup to an external drive in Linux.

Step 1: Mount the external drive.
Step 2: Run rman_cold_backup.sh with nohup.

The content of rman_cold_backup.sh is below:

#!/bin/sh
# Set Oracle Environment for DB
#. ~/.bashrc
. ~/.bash_profile

echo
echo "`date` - Started `basename $0`"
echo

export NLS_DATE_FORMAT='DD-MON-YY HH24:MI:SS'
rman target / <<EOF
run {
shutdown immediate;
startup mount;
allocate channel prmy1 type disk format '/media/full_backup_%d_%s_%p';
allocate channel prmy2 type disk format '/media/full_backup_%d_%s_%p';
allocate channel prmy3 type disk format '/media/full_backup_%d_%s_%p';
allocate channel prmy4 type disk format '/media/full_backup_%d_%s_%p';
BACKUP CURRENT CONTROLFILE format '/media/ctrl_file_%d_%s_%p';
BACKUP AS COMPRESSED BACKUPSET DATABASE;
release channel prmy1;
release channel prmy2;
release channel prmy3;
release channel prmy4;
alter database open;
}
EOF
if [ $? -eq 0 ]
then
echo "====================="
echo "RMAN Backup Completed"
echo "====================="
else
echo "=================="
echo "RMAN Backup Failed"
echo "=================="
exit 1
fi

echo
echo "`date` - Finished `basename $0`"
echo

As this backup is a cold backup, the archive log files will not be required to restore and recover the database. Hence the archive log files are not added to the backup.

However, if you wish to back up the archive logs as well, you can change

BACKUP AS COMPRESSED BACKUPSET DATABASE;

To

BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;

in the above script.

Remember to run the script with nohup; an example invocation is shown below.
Also remember to check that there is enough space on the external drive.
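For reference, running the script under nohup could look like this (the script location and log file path are just examples):

nohup sh /home/oracle/scripts/rman_cold_backup.sh > /tmp/rman_cold_backup.log 2>&1 &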