Oracle PL/SQL performance tuning using BULK COLLECT and FORALL

Here I'll show how to tune the performance of your PL/SQL code in an Oracle database using BULK COLLECT and FORALL. By using BULK COLLECT and FORALL instead of a normal FOR LOOP I have achieved significant performance benefits; in some cases I was able to cut execution time from 14 minutes to just 10 seconds.

You may already know that in an Oracle database PL/SQL code is executed by the PL/SQL engine and SQL by the SQL engine, and that SQL is embedded within PL/SQL. When the PL/SQL engine encounters SQL code, it passes control to the SQL engine for execution. This is called context switching between SQL and PL/SQL.

For example see the code below:

FOR i in 1..1000 LOOP
insert into emp values (…);
END LOOP;

In the above code, while the PL/SQL engine executes the FOR loop it has to hand the INSERT statement to the SQL engine 1,000 times, which means 1,000 context switches. This degrades performance and lengthens execution time. With the BULK COLLECT and FORALL method the same operation needs only a couple of context switches per batch of rows, and that is where the significant performance benefit comes from.

The test case here copies 1,000,000 records from a simple source table to a destination table. The source and destination tables have same structure. The only difference is that the source table has a primary key.

Here is the code that I used to create the source table called t_source and the destination table called t_dest:

create table t_source(
empno number,
ename varchar2(10),
joindate date);

alter table t_source add constraint pk_src primary key (empno);

create table t_dest as select * from t_source;

Then I created 1,000,000 records in t_source using this simple code

declare
begin
for i in 1..1000000 loop
insert into t_source values (i, 'emp'||i, sysdate);
end loop;
commit;
end;
/

After inserting the source data, I gathered statistics on the table so that the optimizer has up-to-date information about it, giving Oracle the best chance of finding a good way to read the t_source table. This may not be necessary for a simple table such as this, but depending on the number of rows and columns it can help enormously.

Anyway here is the code to gather table stats.

EXEC DBMS_STATS.gather_table_stats('JOE', 'T_SOURCE');

Then let's set up the SQL*Plus environment so that we can see SQL execution times. Run this at the SQL prompt:

SET TIMING ON;

Now I am going to populate t_dest using the usual FOR LOOP, which is a very common method among programmers. The code opens a cursor over the source data, then reads each record from the cursor and inserts it into the t_dest table.

declare
cursor c1 is
select * from t_source;
begin
for src_rec in c1 loop
insert into t_dest values (src_rec.empno, src_rec.ename, src_rec.joindate);
end loop;
commit;
end;

Once the execution is complete you will see output something like this:

PL/SQL procedure successfully completed.

Elapsed: 00:02:40.12
SQL>

Depending on the speed of your machine and your database parameters, your timing may be higher or lower than shown here. We will use it as the baseline to compare against the code that uses BULK COLLECT and FORALL.

Now run this code from your SQL*PLUS session

truncate table t_dest;

declare
cursor c1 is
select * from t_source;
TYPE src_tab IS TABLE OF t_source%ROWTYPE INDEX BY BINARY_INTEGER;
rec_tab src_tab;

begin
open c1;
fetch c1 BULK COLLECT INTO rec_tab LIMIT 10000;
WHILE rec_tab.COUNT > 0 LOOP
FORALL i IN 1..rec_tab.COUNT
INSERT INTO t_dest (empno, ename, joindate) VALUES (rec_tab(i).empno, rec_tab(i).ename, rec_tab(i).joindate);
fetch c1 BULK COLLECT INTO rec_tab LIMIT 10000;
END LOOP;
CLOSE c1;
commit;
end;

The output should be something like this:

PL/SQL procedure successfully completed.

Elapsed: 00:00:16.13
SQL>

As you can see the time to insert 1,000,000 records from source to destination has decreased from 00:02:40.12 to just 00:00:16.13. This is a huge performance gain. When dealing with millions of records the performance benefit may be tremendous.
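
One more point worth noting: for a straight table-to-table copy like this test case, you don't need PL/SQL at all. A single set-based statement avoids context switching entirely and is usually faster still; BULK COLLECT and FORALL pay off when each row genuinely needs procedural processing. A minimal sketch (the APPEND hint is optional and assumes a direct-path insert is acceptable):

insert /*+ append */ into t_dest select * from t_source;
commit;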

Using the hanganalyze tool of the oradebug utility to analyze an Oracle hang

Sometimes you may find that your SQL session is hanging, or that some application that accesses your database is hanging. An application that accesses an Oracle database does so through a session, and it is that session that hangs; to a normal user it simply looks like the application is hanging.

Whether it is a SQL session/statement or an application that is hanging, it can be difficult to find out what is causing the database to hang.

You may use the usual tools such as AWR (Automatic Workload Repository) and ADDM (Automatic Database Diagnostic Monitor) to investigate, but often the information they provide is not enough to determine the cause or to identify the offending session.

Moreover, if your database instance itself is hanging, you cannot run AWR or ADDM at all.

In such cases you can use a tool called hanganalyze. It is provided by Oracle as part of the oradebug utility and is very handy for finding out exactly which session is causing the hang.

Here are the steps to use the hanganalyze tool:

Preparation:
Login as sysdba and run the following:

sqlplus / as sysdba
create user joe identified by joe;
grant create session to joe;
grant resource to joe;

Simulate a session hang (in this case row locking):
Log in as user joe from two sessions; let's refer to them as session 1 and session 2.

From session 1 do the following:

create table dept (
deptno number,
dname varchar2(20),
location varchar2(20));

insert into dept values (10, 'Finance', 'London');
insert into dept values (20, 'HR', 'London');

commit;

SQL> select * from dept;

DEPTNO DNAME                LOCATION
------ -------------------- --------------------
    10 Finance              London
    20 HR                   London

update dept
set location='New York'
where deptno = 10;

Now from session 2 try to update the same data by executing the code below:

update dept
set location='Singapore'
where deptno = 10;

You will notice that the session hangs. This is because the first update in session 1 modified the same row and that transaction has not yet been committed. To preserve data consistency, session 1 holds a lock on the row, so session 2 has to wait until session 1 commits or rolls back.
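
As an aside, before reaching for hanganalyze you can often spot simple blocking like this straight from the data dictionary; a minimal sketch using the blocking_session column of v$session:

select sid, serial#, blocking_session, event
from v$session
where blocking_session is not null;

hanganalyze becomes really valuable when the wait chains are longer or the instance is too unresponsive to run such queries.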

Now run hanganalyze to find out exactly which session is blocking which.
Log in as a SYSDBA user and find the OS process id (spid) of the hung session.

sqlplus / as sysdba

select a.sid, a.serial#, b.spid ospid, to_char(logon_time,'dd-Mon-rr hh24:mi') Logintime
from gv$session a, gv$process b
where a.inst_id = b.inst_id and a.paddr = b.addr and status = 'ACTIVE';

Note down the ospid of the hung session. Then run the following:

SQL> oradebug setospid [spid]
SQL> oradebug unlimit
SQL> oradebug hanganalyze 3
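
To find where the trace file was written, you can ask oradebug right after the hanganalyze call:

SQL> oradebug tracefile_name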

This will generate a trace file containing the details of why the Oracle database session is hanging. Even though the trace file contains loads of information, it is quite easy to spot the real culprit.

Open the trace file and find the section marked "Chains most likely to have caused the hang:". There you will see Chain 1 and Chain 2, which are basically chains of session operations. Further down, where these chains are described in detail, you will find information such as session id, serial#, blocking session, current SQL and so on.

Some extracts from my trace file are below:

Chains most likely to have caused the hang:
[a] Chain 1 Signature: 'SQL*Net message from client'
Chain 1 Signature Hash: 0x38c48850
[b] Chain 2 Signature: 'LNS ASYNC end of log'
Chain 2 Signature Hash: 0x8ceed34f

and the section that provides the chain details:

Chain 1:
-------------------------------------------------------------------------------
Oracle session identified by:
{
instance: 1 (dtcnmh.dtcnmh)
os id: 2653
process id: 36, oracle@SERV01A (TNS V1-V3)
session id: 770
session serial #: 11739
}
is waiting for 'enq: TX - row lock contention' with wait info:
{
p1: 'name|mode'=0x54580006
p2: 'usn
p3: 'sequence'=0x507
time in wait: 59.699862 sec
timeout after: never
wait id: 26
blocking: 0 sessions
current sql: update dept
set location='Singapore'

I hope this helps.

Performing Oracle RMAN Cold backup

A cold backup is taken while the database is not open. When performing a cold backup with RMAN, the database needs to be in MOUNT mode, because RMAN needs the datafile details, which are available once the database is mounted.

A cold backup is also called a consistent backup: before bringing the database into MOUNT mode, it is first shut down with the IMMEDIATE or TRANSACTIONAL option, which means the checkpoint SCNs in the datafiles are all synchronised.

Below are the steps to do RMAN cold backup to an external drive in Linux.

Step 1: Mount the external drive.
Step 2: Run rman_cold_backup.sh in nohup.

The content of rman_cold_backup.sh is below:

#!/bin/sh
# Set Oracle environment for the DB
#. ~/.bashrc
. ~/.bash_profile

echo
echo "`date` - Started `basename $0`"
echo

export NLS_DATE_FORMAT='DD-MON-YY HH24:MI:SS'
# Feed the RMAN commands to rman via a heredoc terminated by EOF
rman target / <<EOF
run {
shutdown immediate;
startup mount;
allocate channel prmy1 type disk format '/media/full_backup_%d_%s_%p';
allocate channel prmy2 type disk format '/media/full_backup_%d_%s_%p';
allocate channel prmy3 type disk format '/media/full_backup_%d_%s_%p';
allocate channel prmy4 type disk format '/media/full_backup_%d_%s_%p';
BACKUP CURRENT CONTROLFILE format '/media/ctrl_file_%d_%s_%p';
BACKUP AS COMPRESSED BACKUPSET DATABASE;
release channel prmy1;
release channel prmy2;
release channel prmy3;
release channel prmy4;
alter database open;
}
EOF
if [ $? -eq 0 ]
then
echo "====================="
echo "RMAN Backup Completed"
echo "====================="
else
echo "=================="
echo "RMAN Backup Failed"
echo "=================="
exit 1
fi

echo
echo "`date` - Finished `basename $0`"
echo

As this is a cold backup, the archived log files will not be required to restore and recover the database, hence they are not included.

However, if you wish to back up the archived logs as well, you can change

BACKUP AS COMPRESSED BACKUPSET DATABASE;

To

BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG;

In the above script.

Remember to run the script in nohup.
Also remember to check beforehand that there is enough space on the external drive.
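
For example, a typical invocation could look like this (script and log file names assumed):

nohup ./rman_cold_backup.sh > rman_cold_backup.log 2>&1 &

Running it under nohup keeps the backup going even if your terminal disconnects, and the log file captures the progress messages echoed by the script.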

Linux Restricted Shells: rssh and scponly

Restricted shells like rssh and scponly give a sysadmin the ability to limit the operations a Linux user can perform. For example, you can create a user that is allowed to copy files via scp but is not permitted to log in to the system's command line. This is an important security feature that every sysadmin should consider in order to prevent unauthorized activity by users, for example over SSH.

If you have online storage that is used for uploading backup data over scp or rsync/ssh from remote hosts, it is highly recommended to use restricted shells for those incoming connections; that way, even if an attacker obtains the username/password (or key), he (or she!) won't be able to break into your system.

scponly is an extremely simple restricted shell: a user account that has the scponly binary as its shell can do nothing except transfer data via scp or rsync/scp. rssh provides a few more features: you can limit users to selected protocols such as scp, sftp, rsync, cvs or rdist, with or without a chroot environment.

Installation

I prefer using yum or aptitude to install software like rssh or scponly, so the fastest way is to try one of the commands below, depending on your distribution:

apt-get install rssh
apt-get install scponly
yum install rssh
yum install scponly

If you cannot find the desired restricted shell in your Linux distro's repository, you should download the sources and do the usual ./configure, make and make install. Here are the links: latest rssh .tar.gz, latest scponly .tgz.
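
For reference, a source build would typically follow the usual pattern (archive name and version assumed for illustration):

tar xzf rssh-2.3.4.tar.gz
cd rssh-2.3.4
./configure
make
sudo make install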

Configuration

scponly doesn't need any configuration and works out of the box, so you just set it as the shell for a user account. Here are some examples.

Create a new user account with scponly as its shell:

useradd -s /usr/sbin/scponly user1

Modify an existing user account to set rssh as its shell:

usermod -s /usr/sbin/rssh user2

Here /usr/sbin/scponly and /usr/sbin/rssh are the binary executables of the respective shells (paths may differ on your system).

rssh comes with a text configuration file, usually stored at /etc/rssh.conf. You can either set up per-user settings there or configure global restrictions for all accounts that use rssh. The default rssh.conf file is well commented, so there shouldn't be any problem configuring rssh to your needs. All the same, here are some examples.

If you wish to restrict all users to scp and rsync only, then in rssh.conf leave only the corresponding lines uncommented, as below:

allowscp
#allowsftp
#allowcvs
#allowrdist
allowrsync

Now on to the per-user examples. If user peter is to be allowed the scp protocol only, the following line in rssh.conf will do it:

user=peter:022:10000:

If user ann is to be allowed scp and rsync only:

user=ann:022:10001:

As you can see, the enabled protocols in a per-user entry are given as five binary digits in the order scp, sftp, cvs, rdist, rsync; for example 11000 (scp, sftp), 11111 (scp, sftp, cvs, rdist, rsync) or 00000 (no protocols enabled). The 022 in the examples above specifies the umask.
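
Putting the pieces together, a commented per-user block in rssh.conf might look like this (usernames assumed):

# syntax: user=<username>:<umask>:<access bits in the order scp sftp cvs rdist rsync>[:<chroot path>]
# peter: scp only
user=peter:022:10000:
# ann: scp and rsync
user=ann:022:10001: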

Testing

Let’s assume you’ve created user1 and enabled only scp and rsync using rssh. An attempt to access the server via SSH under user1 account will end with the following output:

artiomix$ ssh user1@1.2.3.4
user1@1.2.3.4's password: 
 
This account is restricted by rssh.
Allowed commands: scp rsync
 
If you believe this is in error, please contact your system administrator.
 
Connection to 1.2.3.4 closed.

At the same time scp transfers will work without problems:

artiomix$ scp -P 23451 /etc/test.file user1@1.2.3.4:/tmp
user1@1.2.3.4's password:
test.file                             100%  983     1.0KB/s   00:00

Further Reading

rssh supports chroot environments for scp, rsync and the other transfer protocols. This means you can restrict users not only by the commands they can use but also by the part of the filesystem they can reach. For example, user1 can be chrooted to /chroot_user1 so the account can't be used to copy anything from the /etc or /var/www directories of the server. There is a nice manual about chroot in rssh.
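
As a sketch, the chroot jail goes into the fourth field of the per-user rssh.conf entry (path assumed; the jail itself must be populated with the binaries and libraries rssh needs):

# user1: scp and rsync, jailed under /chroot_user1
user=user1:022:10001:/chroot_user1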

Failover and Load Balancing using HAProxy

HAProxy sample topology

HAProxy is an open source proxy that can be used to enable high availability and load balancing for web applications. It was designed especially for high-load projects, so it is very fast and predictable; HAProxy is based on a single-process model.

In this post I'll describe a sample HAProxy setup: users' requests are load balanced between two web servers, Web1 and Web2; if one of them goes down, all requests are processed by the surviving server, and once the dead server recovers, load balancing kicks in again. See the topology to the right.

Installation

HAProxy is included in the repositories of the major Linux distributions, so if you're using CentOS, Red Hat or Fedora, type the following command:

yum install haproxy

If you're an Ubuntu, Debian or Linux Mint user, use this one instead:

apt-get install haproxy

Configuration

As soon as HAProxy is installed it's time to edit its configuration file, usually located at /etc/haproxy/haproxy.cfg. Official documentation for HAProxy 1.4 (stable) is here.

Here is a configuration file that implements the setup shown in the diagram and described above:

global
        user daemon
        group daemon
        daemon
        log 127.0.0.1 daemon
 
listen http
        bind 1.2.3.4:80
        mode http
        option tcplog
 
        log global
        option dontlognull
 
        balance roundrobin
        clitimeout 60000
        srvtimeout 60000
        contimeout 5000
        retries 3
        cookie SERVERID insert nocache
        server web1 web1.example.com:80 cookie web1 check
        server web2 web2.example.com:80 cookie web2 check

Let's go over the most important parts of this configuration file. The global section specifies the user and group under which the haproxy process runs (daemon in our example). The daemon line tells HAProxy to run in the background, and log 127.0.0.1 daemon specifies the syslog facility for HAProxy's logs.

The listen http section contains the line bind 1.2.3.4:80, which specifies the IP address and port used to accept users' requests (these will be load balanced between Web1 and Web2). The line mode http means that HAProxy will reject traffic other than HTTP and will load balance at the HTTP protocol level.

The line balance roundrobin specifies the load-balancing algorithm: each web server (Web1 and Web2) is used in turn, according to its weight. In our example the weights of both servers are the same, so the load is spread evenly.

The server web1 … and server web2 … lines specify the web servers used for load balancing and failover; in our case they are balanced round robin and have the same priority/weight.
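
If one backend is more powerful than the other, you can skew the distribution with per-server weights; for example (weights chosen for illustration):

        server web1 web1.example.com:80 weight 2 check
        server web2 web2.example.com:80 weight 1 check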

The cookie lines in the configuration file are optional: they enable cookie-based session persistence. HAProxy inserts a SERVERID cookie identifying the server that handled the first request, so that if you log in to the web application on Web1, your subsequent requests carry that cookie and keep being routed to Web1, and your logged-in session stays open.
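
Once the file is saved, it's worth validating it before restarting the service; a quick check (config path as above):

haproxy -c -f /etc/haproxy/haproxy.cfg
service haproxy restart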

Top 5 Password Managers for Linux [Guest Post]

In this post you will find a set of password managers for Linux that provide secure storage for your passwords and sensitive data. If you still keep your passwords in plain text, you should consider one of the available password managers, so this article is for you.

KeePassX

KeePassX has been a very popular and well-known password manager for Linux for a long time and is still trusted by a large number of users. When a user launches KeePassX for the first time, it requires setting up a master password to add an extra layer of security to the password storage. As an option you can use a key file instead of the password; the key file can also be used along with the master password for stronger security. The KeePassX application is rather simple: you can easily create one or more databases, each protected by a master password and containing all your login credentials in encrypted form. It is considered one of the most secure password managers. If you're an Ubuntu user, just type the following command in a terminal:

sudo apt-get install keepassx

GPassword Manager

GPassword Manager (GPM) is also a secure and highly rated password manager, with a friendlier and easier-to-use interface than KeePassX. It has many features that make it a good choice for advanced users. One of its unique features is the ability to set and add favorites to the system tray. GPM uses the Crypto++ library for encryption, which is available on both Windows and Linux, so the same database can be used on different platforms without converting anything.

My Passwords

My Passwords is a simple and easy-to-use utility that stores all your login credentials encrypted within a single file. Its most appealing traits are its speed and the fact that it requires no installation. The encryption algorithm used is AES; storage in the Derby database format along with AES encryption lets you create a secure and fast password repository. The interface is fairly simple.

Figaro's Password Manager 2

Figaro's Password Manager 2 is another powerful tool with strong encryption, which makes it one of the most secure utilities for managing passwords on Linux. It uses AES-256 encryption for the database files that hold all your login credentials (protected by a master password that you set up on first start).

Gringotts

Gringotts is a rather old project: the application lets Linux/Unix users keep notes in secure storage encrypted with symmetric ciphers. Gringotts offers a set of eight different algorithms for encrypting your data, plus several options for hashing and compression. Its interface is not as simple as those of the other password managers, but it is still easy to use and most effective for old-school bearded Unix users.

About the author: Kelly Marsh is a blogger by profession. She loves writing on technology and luxury.

Grub Fallback: Boot good kernel if new one crashes

It's hard to believe, but I didn't know about Grub's fallback feature. So every time I needed to reboot a remote server into a new kernel I had to test the kernel on a local server first, to make sure it wouldn't panic on the remote unit. And if a kernel panic happened anyway, I had to ask somebody with physical access to the server to reboot the hardware and choose the proper kernel in Grub. That's all tedious and unhealthy; it's much better to use Grub's native fallback feature.

Grub is the default boot loader in most Linux distributions today; at least the major distros like CentOS/Fedora/Red Hat, Debian/Ubuntu/Mint and Arch use Grub. This makes it possible to use the Grub fallback feature out of the box. Here is an example scenario.

Say there is a remote server hosted in New Zealand and you (sitting in Denmark) have access to it over the network only (no console server). In this case you cannot afford for a new kernel to make the server unreachable: if the new kernel crashes during boot, it won't load the network interface drivers, so your Linux box won't appear online until somebody reboots it into a working kernel. Thankfully, Grub can be configured to try loading the new kernel once, and if it fails Grub will load another kernel according to the configuration. You can see my example grub.conf below:

default=saved
timeout=5
splashimage=(hd0,1)/boot/grub/splash.xpm.gz
hiddenmenu
fallback 0 1
title Fedora OpenVZ (2.6.32-042stab053.5)
        root (hd0,1)
        kernel /boot/vmlinuz-2.6.32-042stab053.5 ro root=UUID=6fbdddf9-307c-49eb-83f5-ca1a4a63f584 rd_MD_UUID=1b9dc11a:d5a084b5:83f6d993:3366bbe4 rd_NO_LUKS rd_NO_LVM rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=sv-latin1 rhgb quiet crashkernel=auto
        initrd /boot/initramfs-2.6.32-042stab053.5.img
        savedefault fallback
title Fedora (2.6.35.12-88.fc14.i686)
        root (hd0,1)
        kernel /boot/vmlinuz-2.6.35.12-88.fc14.i686 ro root=UUID=6fbdddf9-307c-49eb-83f5-ca1a4a63f584 rd_MD_UUID=1b9dc11a:d5a084b5:83f6d993:3366bbe4 rd_NO_LUKS rd_NO_LVM rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=sv-latin1 rhgb quiet
        initrd /boot/initramfs-2.6.35.12-88.fc14.i686.img
        savedefault fallback

According to this configuration Grub will try to load the 'Fedora OpenVZ' kernel once, and if that fails the system will be booted into the known-good 'Fedora' kernel. If 'Fedora OpenVZ' loads fine, you'll be able to reach the server over the network after the reboot. Notice the lines 'default=saved' and 'savedefault fallback', which are mandatory for the fallback feature to work.
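
With default=saved in place, the saved entry usually has to be initialized once from the shell; a sketch assuming the grub.conf above (entries are counted from 0):

grub-set-default 0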

Alternative way

I've heard that the official Grub fallback feature may work incorrectly on RHEL 5 (and CentOS 5), so there is an elegant workaround (found here):

1. Add the parameter 'panic=5' to your new kernel line so it looks like below:

title Fedora OpenVZ (2.6.32-042stab053.5)
        root (hd0,1)
        kernel /boot/vmlinuz-2.6.32-042stab053.5 ro root=UUID=6fbdddf9-307c-49eb-83f5-ca1a4a63f584 rd_MD_UUID=1b9dc11a:d5a084b5:83f6d993:3366bbe4 rd_NO_LUKS rd_NO_LVM rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=sv-latin1 rhgb quiet crashkernel=auto panic=5
        initrd /boot/initramfs-2.6.32-042stab053.5.img

This parameter makes the crashed kernel reboot itself after 5 seconds.

2. Point the default Grub parameter to the good kernel, e.g. 'default=0'.

3. Type in the following commands (the good kernel appears first in grub.conf and the new kernel is the second one):

# grub
 
grub> savedefault --default=1 --once
savedefault --default=1 --once
grub> quit

This makes Grub boot into the new kernel once, and if it fails the good kernel will be loaded instead. Now you can reboot the server and be confident it will appear back online within a few minutes. I usually prefer the native Grub fallback feature, but if it doesn't work for you it makes sense to try the workaround above.

Why Mosh is better than SSH

Mosh screenshot

Mosh (short for Mobile Shell) is a replacement for SSH for remote connections to Unix/Linux systems. It brings a few noticeable advantages over well-known SSH connections: in brief, it's faster and more responsive, especially on high-latency and/or unreliable links.

Key benefits of Mosh

  • Stays connected if your IP changes. Mosh's roaming feature allows you to move between Internet connections and keep your Mosh session alive. For example, if your wifi connection changes IP you don't need to reconnect.
  • Keeps the session after losing connectivity. For example, if you lose your Internet connection for a while, or your laptop goes offline due to an exhausted battery, you'll be able to pick up the previously opened Mosh session easily.
  • No root rights needed to use Mosh. Unlike SSH, the Mosh server is not a daemon that needs to listen on a specific port for incoming connections. The Mosh server and client are executables that can be run by an ordinary user.
  • The same credentials for remote login. Mosh uses SSH for authorization, so you open a connection with the same credentials as before.
  • Responsive Ctrl+C. Unlike SSH, Mosh doesn't fill up network buffers, so even if you accidentally ask it to output a 100 MB file you can hit Ctrl+C and stop it immediately.
  • Better on slow or lagged links. Have you ever tried to use SSH on a satellite link where the average RTT is 600 ms or more? With Mosh you don't need to wait for the server's reply to see your own typing. This works at the command line and in programs such as vi or emacs, making work over slow connections much more comfortable.

Well, there are some disadvantages too:

  • No IPv6 support.
  • UTF-8 only.

Mosh is available for all major Linux distributions, FreeBSD and Mac OS X systems:

Ubuntu (12.04 LTS) or Debian (testing/unstable):

sudo apt-get install mosh

Gentoo:

emerge net-misc/mosh

Arch Linux:

packer -S mobile-shell-git

FreeBSD:

portmaster net/mosh

Mac OS X:

mosh-1.1.3-2.pkg (https://github.com/downloads/keithw/mosh/mosh-1.1.3-2.pkg)

Sources: mosh-1.1.3.tar.gz
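
Once Mosh is installed on both ends, starting a session looks just like SSH (hostname assumed for illustration):

mosh user1@server.example.com

Under the hood the client logs in over SSH, starts a mosh-server for you, and then switches to its own UDP-based protocol.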

Project’s website

P.S. It's better than a combination of SSH and GNU Screen.

Add physical NIC to XenServer

If you add a new physical network interface to the hardware that runs XenServer, it won't appear in XenCenter by default.

In order to attach it to VMs or change its settings you'll need to type a few commands into XenServer's CLI.

1. Connect to XenServer via SSH as root:

ssh root@192.168.10.1 -v

2. Make sure the new NIC is attached to the hardware and detected by Linux. In the output of the command below you can see three Ethernet controllers (the last one was just attached):

[root@localhost ~]# lspci  | grep -i ethernet
10:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 01)
1e:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5723 Gigabit Ethernet PCIe (rev 10)
30:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10)

As you can see, this NIC isn't shown in XenCenter yet, and the command below doesn't list its UUID among the detected interfaces:

[root@localhost ~]# xe pif-list
uuid ( RO)                  : 095abcc1-4d64-7925-200f-a91d558ec872
                device ( RO): eth1
    currently-attached ( RO): true
                  VLAN ( RO): -1
          network-uuid ( RO): 9da74476-ffcb-6824-25ad-62d46f34e252
 
uuid ( RO)                  : 555844b2-4061-47e0-52ef-01e42f182eef
                device ( RO): eth0
    currently-attached ( RO): true
                  VLAN ( RO): -1
          network-uuid ( RO): 90a0e347-9246-7ac9-c939-30983602c14e

Likewise, there is no new eth2 in ifconfig's output:

[root@localhost ~]# ifconfig     
eth0      Link encap:Ethernet  HWaddr 68:B5:99:E3:1C:56  
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1953 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2475 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:201110 (196.3 KiB)  TX bytes:1929408 (1.8 MiB)
          Interrupt:19 
 
eth1      Link encap:Ethernet  HWaddr 00:30:4F:33:43:6E  
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:110 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:14435 (14.0 KiB)  TX bytes:0 (0.0 b)
          Interrupt:17 Base address:0xe000
[root@localhost ~]# ifconfig eth2
ifconfig: interface eth2 does not exist

3. The solution is pretty easy: you just need to find the UUID of the XenServer host to which you'd like to attach the new NIC, then trigger a PIF scan. Use the following commands:

[root@localhost ~]# xe host-list 
uuid ( RO)                : c5ab0df3-440a-4164-b1a4-6febf1ff0052
          name-label ( RW): XenServer HP Proliant ML 110
    name-description ( RW): Default install of XenServer

and

[root@localhost ~]# xe pif-scan host-uuid=c5ab0df3-440a-4164-b1a4-6febf1ff0052

That's it; from now on you'll see the new NIC in XenCenter.

[root@localhost ~]# xe pif-list
uuid ( RO)                  : 095abcc1-4d64-7925-200f-a91d558ec872
                device ( RO): eth1
    currently-attached ( RO): true
                  VLAN ( RO): -1
          network-uuid ( RO): 9da74476-ffcb-6824-25ad-62d46f34e252
 
uuid ( RO)                  : 555844b2-4061-47e0-52ef-01e42f182eef
                device ( RO): eth0
    currently-attached ( RO): true
                  VLAN ( RO): -1
          network-uuid ( RO): 90a0e347-9246-7ac9-c939-30983602c14e
 
uuid ( RO)                  : 7f3b59d7-1508-835a-b268-4476bbac33d5
                device ( RO): eth2
    currently-attached ( RO): false
                  VLAN ( RO): -1
          network-uuid ( RO): 9584917b-e49a-f075-f1e0-8ba2c4a4bf02