Track file changes using auditd

Most Linux distributions come with the Linux Auditing System, which makes it possible to track file changes and file accesses as well as system calls. It’s a pretty useful feature for sysadmins who want to know who accessed or changed sensitive files like /etc/passwd or /etc/sudoers, and when.

The auditd daemon usually runs in the background and starts at boot by default; it logs these events to /var/log/audit/audit.log (or to a different file if configured otherwise in auditd.conf). The common usage is to list all files that should be watched and then search auditd’s logs from time to time. For example, I prefer to track any changes to /etc/passwd, reading/writing of /etc/sudoers, execution of /bin/some/binary, or just everything (reads, writes, attribute changes, execution) for my /very/important/file.

In order to configure this you’ll need two commands: auditctl and ausearch. The first one configures the auditd daemon (e.g. sets a watch on a file), the second one searches auditd’s logs (it’s possible to grep /var/log/audit/audit.log too, but the ausearch command makes this task much easier).

Install and start the Linux Auditing System

If the auditd daemon isn’t installed on your system, you can fix this with one of the commands below:

sudo apt-get install auditd

or

sudo yum install audit

The next step is to make sure that auditd is running; if the command ps ax | grep [a]udit shows nothing, start auditd with:

/etc/init.d/auditd start

As soon as the auditd daemon is started, we can configure it to track file changes using the auditctl command.

Make auditd log file changes

auditctl -w /etc/passwd -k passwd-ra -p ra

This command adds a rule telling the auditd daemon to monitor the file /etc/passwd (see option -w /etc/passwd) for reads and attribute changes (see option -p ra, where r is for read and a is for attribute change). It also specifies a filter key (-k passwd-ra) that will uniquely identify these records in the audit log files.
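Note that rules added with auditctl on the command line don’t survive a reboot. To make a watch permanent, the same rule (without the auditctl command itself) can go into /etc/audit/audit.rules, or into a file under /etc/audit/rules.d/ on newer distributions — a sketch, where the sudoers and /very/important/file rules are just the examples mentioned above:

```
# /etc/audit/audit.rules — same syntax as the auditctl arguments
-w /etc/passwd -p ra -k passwd-ra
-w /etc/sudoers -p rw -k sudoers-rw
-w /very/important/file -p rwxa -k important-all
```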

Now let’s test this rule: output the last lines of the /etc/passwd file and then search the audit log for the corresponding records

tail /etc/passwd

and then

[root@test artemn]# ausearch -k passwd-ra
----
time->Wed Jul  4 15:17:14 2012
type=CONFIG_CHANGE msg=audit(1341407834.821:207310): auid=500 ses=23783 op="add rule" key="passwd-ra" list=4 res=1
----
time->Wed Jul  4 15:17:20 2012
type=PATH msg=audit(1341407840.181:207311): item=0 name="/etc/passwd" inode=31982841 dev=09:02 mode=0100644 ouid=0 ogid=0 rdev=00:00
type=CWD msg=audit(1341407840.181:207311):  cwd="/home/artemn"
type=SYSCALL msg=audit(1341407840.181:207311): arch=c000003e syscall=2 success=yes exit=3 a0=7fffecd41817 a1=0 a2=0 a3=7fffecd40b40 items=1 ppid=642502 pid=521288 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=23783 comm="tail" exe="/usr/bin/tail" key="passwd-ra"

As you can see, the output of the second command shows two records for the filter key ‘passwd-ra’: the first (type=CONFIG_CHANGE) was logged when the rule itself was added, and the second shows that the root user (uid=0 gid=0) read the file /etc/passwd using the tail command (comm="tail" exe="/usr/bin/tail") on July 4, 2012 (time->Wed Jul 4 15:17:20 2012).
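The long number inside msg=audit(1341407840.181:207311) is the event timestamp in epoch seconds (followed by milliseconds and a serial number). ausearch decodes it for you in the time-> lines, but if you ever read the raw log, GNU date can convert it:

```shell
# Decode the epoch seconds from an audit record header (GNU date, UTC):
date -u -d @1341407840
```

The time-> line in the example above shows the same moment in the server’s local timezone.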

The ausearch utility is pretty powerful, so I recommend reading the output of man ausearch; in the meantime, here are some useful examples:

ausearch -x /bin/grep
ausearch -x rm

This approach lets you scan auditd records for a certain executable; e.g. if you’d like to see whether any of the watched files were deleted using the rm command, use the second of the two commands above.

This one will show you all records for a certain user ID (here, UID 1000):

ausearch -ui 1000

Limit CPU usage of Linux process

cpulimit is a small program written in C that allows you to limit the CPU usage of a Linux process. The limit is specified as a percentage, so it’s possible to prevent the high CPU load generated by scripts, programs or processes.

I found cpulimit pretty useful for scripts running from cron; for example, I can run overnight backups and be sure that compressing a 50GB file with gzip won’t eat all CPU resources, leaving all other system processes enough CPU time.
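As a sketch, a crontab entry for such a nightly job could look like this (the path, file name and schedule are made-up examples):

```
# run at 02:30 every night, capping gzip at 10% CPU
30 2 * * * /usr/sbin/cpulimit --limit=10 /bin/gzip /backup/nightly-dump.tar
```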

In most Linux distributions cpulimit is available from the binary repositories, so you can install it using one of the commands:

sudo apt-get install cpulimit

or

sudo yum install cpulimit

If it isn’t packaged for your distro, it’s extremely easy to compile:

cd /usr/src/
wget --no-check-certificate https://github.com/opsengine/cpulimit/tarball/master -O cpulimit.tar
tar -xvf cpulimit.tar
cd opsengine-cpulimit-9df7758
make
ln -s /usr/src/opsengine-cpulimit-9df7758/cpulimit /usr/sbin/cpulimit

From that moment you can run commands with a CPU limit; e.g. the command below runs gzip compression so that the gzip process will never step over 10% CPU:

/usr/sbin/cpulimit --limit=10 /bin/gzip vzdump-openvz-102-2012_06_26-19_01_11.tar

You can check the actual CPU usage of gzip using the commands:

ps axu | grep [g]zip

or

top

Btw, the first command contains ‘grep [g]zip’ to avoid the last line of the usual output below (a plain ‘grep gzip’ matches its own command line):

root    896448  10.0  3.1 159524  3528 ?        S    13:12   0:00 /usr/sbin/cpulimit --limit=10 /bin/gzip vzdump-openvz-102-2012_06_26-19_01_11.tar
root       26490  0.0  0.0   6364   708 pts/0    S+   15:24   0:00 grep gzip
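The trick works because the square brackets turn the pattern into a character class: the regex [g]zip still matches ‘gzip’ in other processes’ command lines, but not the literal text ‘[g]zip’ in grep’s own. A quick way to convince yourself, using printf to stand in for the process list:

```shell
# The pattern [g]zip matches "gzip" but not the literal text "[g]zip",
# so grep filters its own entry out of a process listing.
# Prints only the /bin/gzip line:
printf '/bin/gzip backup.tar\ngrep [g]zip\n' | grep '[g]zip'
```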

Using cpulimit you can also limit an already running process; e.g. the command below applies a 20% CPU limit to the process with PID 2342:

/usr/sbin/cpulimit -p 2342 -l 20

It’s also possible to specify the process by the full path of its executable instead of a PID:

/usr/sbin/cpulimit -P /usr/sbin/nginx -l 30

Geolocation for Nagios

Some time ago I came across the NagMap addon for Nagios and found it pretty helpful for monitoring multiple hosts around the world.

For example, if there are production servers in Europe and the US and others in India and New Zealand, it’s much better to see their states on a map than in the boring Nagios host status list. Every host can have one of the following states based on ping statistics: green/white (ok) corresponds to 0–10% packet loss, yellow (warning) is 10–20% packet loss, and red (critical) means the host is down or packet loss to it is more than 20%. All three states are shown on the map using different markers.

Using the NagMap addon for Nagios it’s possible to create a map of the hosts and their states on top of Google Maps; here is part of my map:

The above screenshot shows all hosts in the OK state (the desired picture), so when some host goes down or becomes sluggish you’ll see yellow or red markers instead (the exact marker depends on the type of the host).

Set up and configure NagMap

So first of all you need to download the nagmap tarball from the project’s download section and unpack it somewhere on the server that hosts your Nagios monitoring system. The tarball contains PHP scripts that read Nagios’s status file and show the corresponding markers on the map using Google Maps. I suggest creating a new subdirectory in the directory where the Nagios files are located:

cd /usr/share/nagios/
wget http://labs.shmu.org.uk/nagmap/nagmap-0.11.tar.gz
tar -xvzf nagmap-0.11.tar.gz
rm nagmap-0.11.tar.gz

Once the archive is unpacked, it’s necessary to set the path to the Nagios status file in NagMap’s file status.php. In my case Nagios’s status.dat file is located at /var/nagios/status.dat, so I have the following line in nagmap’s status.php:

$fp = fopen("/var/nagios/status.dat","r");

Naturally, the web server must have sufficient rights to read the /var/nagios/status.dat file.

The next step is to set the geographical location for the hosts that should be shown on NagMap. It is specified in the following way:

define host {
        use generic-host
        host_name HostName1
        address 11.22.33.44
        notes latlng: 40.664167, -73.938611
        check_command check-host-alive
        register 1
}

Where “40.664167, -73.938611” is the latitude and longitude of the host (New York City in this example). So you should add a ‘notes latlng:’ line to every host in Nagios that you want to see on the map.
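Hosts without a ‘notes latlng:’ line simply won’t show up on the map, so a quick sanity check (a sketch; the config path below is just an example) is to compare the number of host definitions with the number of latlng notes:

```shell
# Both counts should match if every host has coordinates
# (adjust the path to wherever your host definitions live):
grep -c 'define host' /etc/nagios/objects/hosts.cfg
grep -c 'notes latlng:' /etc/nagios/objects/hosts.cfg
```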

From this point you should be able to open the map at a URL like https://your.server.com/nagios/nagmap/. If the page is empty, there is some problem reading or parsing the status.dat file. Unfortunately nagmap doesn’t provide a debug feature, so you should open marker.php directly (e.g. https://your.server.com/nagios/nagmap/marker.php) and look at its output to see where the problem is; most probably you’ll need some basic PHP knowledge. Btw, the file marker.php contains the paths to the marker images, so you can easily change them from the defaults there.