Archive for the ‘ Performance ’ Category

How to change default I/O scheduler


Red Hat Enterprise Linux 3, based on the 2.4 kernel, uses a single, robust, general-purpose I/O elevator. The I/O schedulers provided in Red Hat Enterprise Linux 4, embedded in the 2.6 kernel, have advanced the I/O capabilities of Linux significantly. With Red Hat Enterprise Linux 4, the kernel's I/O behavior can be tuned at boot time by selecting one of four I/O schedulers to accommodate different I/O usage patterns:

* Completely Fair Queuing—elevator=cfq (default)
* Deadline—elevator=deadline
* NOOP—elevator=noop
* Anticipatory—elevator=as

The I/O scheduler can be selected at boot time using the “elevator” kernel parameter. In the following example, the system has been configured to use the deadline scheduler in the grub.conf file.

title Red Hat Enterprise Linux Server (2.6.18-8.el5)
root (hd0,0)
kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/vg0/lv0 elevator=deadline
initrd /initrd-2.6.18-8.el5.img

In Red Hat Enterprise Linux 5, it is also possible to change the I/O scheduler for a particular disk on the fly.

# cat /sys/block/sdb/queue/scheduler
noop anticipatory deadline [cfq]
# echo 'deadline' > /sys/block/sdb/queue/scheduler
# cat /sys/block/sdb/queue/scheduler
noop anticipatory [deadline] cfq
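
To see at a glance which scheduler every SCSI-style disk is using, a small shell loop works (a sketch; the sd* glob is an assumption and may need adjusting to hd*, xvd*, etc. for your device naming):

# for f in /sys/block/sd*/queue/scheduler; do echo "$f: $(cat "$f")"; done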

The following are the tunable files for the deadline scheduler. They can be tuned to values that suit the hardware's performance and the software's requirements; a short example follows the list:

/sys/block/DEVNAME/queue/iosched/read_expire
/sys/block/DEVNAME/queue/iosched/write_expire
/sys/block/DEVNAME/queue/iosched/fifo_batch
/sys/block/DEVNAME/queue/iosched/writes_starved
/sys/block/DEVNAME/queue/iosched/front_merges

DEVNAME is the name of the block device (sda, sdb, hda, etc.).
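
For example, to check and lower the read deadline on sda (values are in milliseconds; 500 is the usual stock default, 300 is purely an illustrative value, and sda is a placeholder device name):

# cat /sys/block/sda/queue/iosched/read_expire
500
# echo 300 > /sys/block/sda/queue/iosched/read_expire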

A detailed description of the deadline I/O scheduler can be found at:
/usr/share/doc/kernel-[version]/Documentation/block/deadline-iosched.txt.

http://www.redhat.com/magazine/008jun05/features/schedulers/

http://www.redbooks.ibm.com/abstracts/redp4285.html

Benchmark figures:

* Random read performance per I/O elevator (synchronous)
* CPU utilization by I/O elevator (asynchronous)
* Impact of nr_requests on the Deadline elevator (random write, ReiserFS)
* Impact of nr_requests on the CFQ elevator (random write, Ext3)
* Random write throughput comparison between Ext3 and ReiserFS (synchronous)
* Random write throughput comparison between Ext3 and ReiserFS (asynchronous)

Disk I/O stats from /proc/diskstats

cat /proc/diskstats | grep 'sda '
   8    0 sda 2461810 61427 148062742 6482992 660009 1544934 67900384 45642376 0 7162961 52128751

Field 1 — # of reads issued
Field 2 — # of reads merged
Field 3 — # of sectors read
Field 4 — # of milliseconds spent reading
Field 5 — # of writes completed
Field 6 — # of writes merged
Field 7 — # of sectors written
Field 8 — # of milliseconds spent writing
Field 9 — # of I/Os currently in progress
Field 10 — # of milliseconds spent doing I/Os
Field 11 — weighted # of milliseconds spent doing I/Os
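
These counters can be combined directly on the command line. For example, the average time per completed read and write (including time spent in the queue) for sda can be estimated with awk; note that three leading columns (major, minor, device name) come before the 11 stat fields, and sda is just the example device:

awk '$3 == "sda" { printf "avg read: %.2f ms  avg write: %.2f ms\n", ($4 ? $7/$4 : 0), ($8 ? $11/$8 : 0) }' /proc/diskstats

For the sample line above this works out to roughly 2.6 ms per read and 69 ms per write.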

Linux command line – measure disk performance

It can be quite annoying to clone your LVM-backed Xen images and find that the host machine takes about 3 hours to clone a tiny 15 GB image.

A very handy tool, normally installed on Red Hat / CentOS / Fedora machines, is ‘hdparm’. It quickly gives you a rough idea of the disk's read performance.

hdparm -t /dev/drive
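
Adding -T also times cached (buffer-cache) reads, which helps separate raw disk speed from memory bandwidth; /dev/sda below is just an example device:

hdparm -tT /dev/sda

Run it a few times on an otherwise idle system for meaningful numbers.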

Clone a Xen LVM guest with ‘virt-clone’:

virt-clone -o existing_not_running_vm -n new_vm -f /dev/VolGroup00/new_vm --prompt
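
Once the clone finishes, the new logical volume and guest definition can be checked with (assuming the volume group and guest names from the command above):

lvs VolGroup00
virsh list --all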