Performing Oracle RMAN Cold backup

A cold backup is taken while the database is not open. When performing a cold backup with RMAN, the database needs to be in MOUNT mode, because RMAN needs the data file details, which are only available once the database is mounted.

A cold backup is also called a consistent backup. Before the database is brought into MOUNT mode, it is first shut down with the IMMEDIATE or TRANSACTIONAL option, which means the checkpoint SCNs (sequence numbers) in the data file headers and control file are all synchronised.

Below are the steps to take an RMAN cold backup to an external drive on Linux.

Step 1: Mount the external drive.
Step 2: Run rman_cold_backup.sh under nohup (see the example below).
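
For reference, a minimal sketch of those two steps on the command line might look like this (the device name /dev/sdb1 and the mount point /media are assumptions chosen to match the format strings in the script below; adjust them for your environment):

sudo mount /dev/sdb1 /media
nohup sh rman_cold_backup.sh > rman_cold_backup.log 2>&1 &
tail -f rman_cold_backup.log

Running under nohup means the backup keeps going even if your SSH session drops.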

The content of rman_cold_backup.sh is below:

#!/bin/sh
# Set Oracle Environment for DB
#. ~/.bashrc
. ~/.bash_profile

echo
echo "`date` - Started `basename $0`"
echo

export NLS_DATE_FORMAT='DD-MON-YY HH24:MI:SS'
# Feed the RMAN commands to rman via a here-document
rman target / <<EOF
run {
shutdown immediate;
startup mount;
allocate channel prmy1 type disk format '/media/full_backup_%d_%s_%p';
allocate channel prmy2 type disk format '/media/full_backup_%d_%s_%p';
allocate channel prmy3 type disk format '/media/full_backup_%d_%s_%p';
allocate channel prmy4 type disk format '/media/full_backup_%d_%s_%p';
BACKUP CURRENT CONTROLFILE format '/media/ctrl_file_%d_%s_%p';
BACKUP AS COMPRESSED BACKUPSET DATABASE;
release channel prmy1;
release channel prmy2;
release channel prmy3;
release channel prmy4;
alter database open;
}
EOF
if [ $? -eq 0 ]
then
echo "====================="
echo "RMAN Backup Completed"
echo "====================="
else
echo "=================="
echo "RMAN Backup Failed"
echo "=================="
exit 1
fi

echo
echo "`date` - Finished `basename $0`"
echo
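
After the script finishes, a quick sanity check is to confirm that the backup pieces exist on the drive and are known to RMAN. This is an optional verification step, not part of the script above:

ls -lh /media/full_backup_* /media/ctrl_file_*
rman target / <<EOF
LIST BACKUP SUMMARY;
EOF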

As this is a cold (consistent) backup, the archived log files will not be required to restore and recover the database, so they are not included in the backup.

However, if you wish to back up the archived logs as well, you can change

BACKUP AS COMPRESSED BACKUPSET DATABASE;

to

BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;

in the above script.

Remember to run the script under nohup.
Also remember to check that there is enough free space on the external drive before starting.
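
A quick way to check the available space (assuming the drive is mounted on /media, as in the script's format strings):

df -h /media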

Linux Restricted Shells: rssh and scponly

Restricted shells like rssh and scponly give sysadmins the ability to limit the operations a Linux user can perform. For example, you can create a user who is allowed to copy files via scp but is not permitted to log in to the system's command line. This is an important security feature that every sysadmin should consider in order to prevent unauthorized activity by users, for example over SSH.

If you have online storage that is used for uploading backup data over scp or rsync/ssh from remote hosts, it is highly recommended to use restricted shells for those incoming connections, so that even an attacker who obtains the username/password (or key) won't be able to break into your system.

scponly is an extremely simple restricted shell: a user account that has the scponly binary as its shell can do nothing except transfer data via scp or rsync over ssh. rssh provides a few more features: you can limit users to selected protocols such as scp, sftp, rsync, cvs or rdist, optionally inside a chroot environment.

Installation

I prefer using yum or apt-get to install this kind of software, so the fastest way is to try one of the commands below, depending on your distribution and needs:

apt-get install rssh
apt-get install scponly
yum install rssh
yum install scponly

If you can't find the desired restricted shell in your Linux distro's repository, download the sources and do the usual ./configure, make and make install. Here are the links: latest rssh .tar.gz, latest scponly .tgz.
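
For reference, a typical build from source looks roughly like the following (the rssh-2.3.4 file name is just a placeholder for whatever tarball you downloaded, and scponly in particular has several ./configure switches, so check each project's INSTALL notes):

tar xzf rssh-2.3.4.tar.gz
cd rssh-2.3.4
./configure
make
sudo make install
# source builds usually install the binary under /usr/local/bin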

Configuration

scponly doesn't need any configuration and works out of the box, so you just set it as the shell of a user account. Here are some examples.

Create a new user account with scponly as its shell:

useradd -s /usr/sbin/scponly user1

Modify an existing user account to set rssh as its shell:

usermod -s /usr/sbin/rssh user2

Here /usr/sbin/scponly and /usr/sbin/rssh are the binary executables of scponly and rssh respectively.
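
The exact paths depend on your distribution and on how the shell was installed (distribution packages often use /usr/bin, source builds /usr/local/bin), so it's worth confirming them first. Note also that some tools such as chsh only accept shells listed in /etc/shells:

which scponly rssh
grep -E 'scponly|rssh' /etc/shells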

rssh comes with a text configuration file, usually stored at /etc/rssh.conf. You can either set up per-user settings there or configure global restrictions for all accounts that use rssh. The default rssh.conf file is well commented, so configuring rssh to your needs shouldn't be a problem. Still, here are some examples.

If you wish to restrict all rssh users to scp and rsync only, uncomment the corresponding lines in rssh.conf as below:

allowscp
#allowsftp
#allowcvs
#allowrdist
allowrsync

Now for some per-user examples. To allow user peter to use the scp protocol only, add the following line to rssh.conf:

user=peter:022:00001:

To allow user ann to use scp and rsync only:

user=ann:022:10001:

As you can see, the enabled protocols in a per-user entry are given as five binary digits corresponding to rsync, rdist, cvs, sftp and scp in that order, for example 00011 (sftp and scp), 11111 (all five protocols) or 00000 (no protocols enabled). The 022 in the examples above specifies the umask.

Testing

Let's assume you've created user1 and enabled only scp and rsync using rssh. An attempt to access the server via SSH under the user1 account will end with the following output:

artiomix$ ssh user1@1.2.3.4
user1@1.2.3.4's password: 
 
This account is restricted by rssh.
Allowed commands: scp rsync
 
If you believe this is in error, please contact your system administrator.
 
Connection to 1.2.3.4 closed.

At the same time scp transfers will work without problems:

artiomix$ scp -P 23451 /etc/test.file user1@1.2.3.4:/tmp
user1@1.2.3.4's password:
test.file                             100%  983     1.0KB/s   00:00
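
Similarly, an rsync transfer over ssh should also go through (this assumes the same non-standard SSH port 23451 used in the scp example above):

rsync -av -e "ssh -p 23451" /etc/test.file user1@1.2.3.4:/tmp/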

Further Reading

rssh supports chroot environments for scp, rsync and the other transfer protocols. This means that you can restrict users not only by the commands they can run but also by the part of the filesystem they can reach. For example, user1 can be chrooted to /chroot_user1 so the account can't be used to copy anything from the server's /etc or /var/www directories. Here is a nice manual about chroot in rssh.
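
As a rough sketch, the chroot directory can be set in rssh.conf either globally with chrootpath or per user as an optional fourth field of the user entry (the /chroot_user1 path is just the example used above, and the jail itself must be populated with the binaries and libraries rssh needs, which is what the manual covers):

# chroot every rssh user into the same jail
chrootpath=/chroot_user1

# or chroot only user1, allowing scp only (umask 022)
user=user1:022:00001:"/chroot_user1"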

Failover and Load Balancing using HAProxy

[Diagram: HAProxy sample topology]

HAProxy is an open source proxy that can be used to provide high availability and load balancing for web applications. It was designed for high-load projects, so it is very fast and predictable; HAProxy is based on a single-process model.

In this post I'll describe a sample HAProxy setup: users' requests are load balanced between two web servers, Web1 and Web2; if one of them goes down, all requests are handled by the surviving server, and once the dead server recovers, load balancing resumes. See the topology diagram above.

Installation

HAProxy is included in the repositories of the major Linux distributions, so if you're using CentOS, Red Hat or Fedora, type the following command:

yum install haproxy

If you're an Ubuntu, Debian or Linux Mint user, use this one instead:

apt-get install haproxy

Configuration

Once HAProxy is installed, it's time to edit its configuration file, usually located at /etc/haproxy/haproxy.cfg. The official documentation for HAProxy 1.4 (stable) is here.

Here is a configuration file that implements the setup shown in the diagram and described above:

global
        user daemon
        group daemon
        daemon
        log 127.0.0.1 daemon
 
listen http
        bind 1.2.3.4:80
        mode http
        option tcplog
 
        log global
        option dontlognull
 
        balance roundrobin
        clitimeout 60000
        srvtimeout 60000
        contimeout 5000
        retries 3
        cookie SERVERID insert nocache
        server web1 web1.example.com:80 cookie web1 check
        server web2 web2.example.com:80 cookie web2 check

Let's go over the most important parts of this configuration file. The global section specifies the user and group under which the haproxy process runs (daemon in our example). The daemon line tells HAProxy to run in the background, and log 127.0.0.1 daemon specifies the syslog server and facility HAProxy sends its logs to.

The listen http section contains the line bind 1.2.3.4:80, which specifies the IP address and port used to accept users' requests (these will be load balanced between Web1 and Web2). The line mode http means that HAProxy will reject requests that are not valid HTTP and will load balance at the HTTP layer.

The line balance roundrobin selects the load balancing algorithm: each web server (Web1 and Web2) is used in turn, according to its weight. In our example both servers have the same (default) weight, so the load is split evenly.
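
If you wanted Web1 to receive roughly twice as much traffic as Web2, for instance because it has more capacity, you could add explicit weights to the server lines. This is only an illustrative variation, not part of the setup described above:

        server web1 web1.example.com:80 cookie web1 weight 2 check
        server web2 web2.example.com:80 cookie web2 weight 1 check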

The server web1 … and server web2 … lines define the web servers used for load balancing and failover; the check keyword enables health checks so that a dead server is taken out of rotation and requests fail over to the remaining one. In our case both servers are balanced with the round robin algorithm and have the same weight.

The cookie SERVERID insert nocache line, together with the cookie web1/web2 values on the server lines, is optional: it enables cookie-based session persistence. HAProxy inserts a cookie recording which server handled the client, so subsequent requests from the same browser go back to the same server and, for example, a session you opened by logging in on Web1 is not lost by suddenly being switched to Web2.
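
Once the file is saved, it's worth validating the syntax and then (re)starting HAProxy. The exact service command depends on your distribution, so the lines below are just one common variant:

haproxy -c -f /etc/haproxy/haproxy.cfg
service haproxy restart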