Archive for the ‘ Testing ’ Category

OVH Setup server hybrid second disk array

As OVH doesn’t provide a guide for installing the second disk array, which is sold as an option, I will publish my solution.

Please be careful: only try these commands on test configurations, and never on disks containing critical data. Some of them can destroy all of your data.

A solution within the OVH manager seems to be coming soon, but for now it isn’t working.

Tested on Ubuntu 14.04

Some commands to inspect your current configuration:

    fdisk -l
    lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
    df -h -x devtmpfs -x tmpfs

Delete sdb1/sda1 with parted:

    parted /dev/sdb
    (parted) print
    Model: ATA HGST HUS726040AL (scsi)
    Disk /dev/sdb: 4001GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
 
    Number  Start   End    Size   File system  Name     Flags
     1      1049kB  537MB  536MB               primary  boot
 
    (parted) rm 1
    (parted) quit
 
 
    parted /dev/sda
    (parted) print
    Model: ATA HGST HUS726040AL (scsi)
    Disk /dev/sda: 4001GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
 
    Number  Start   End    Size   File system  Name     Flags
     1      1049kB  537MB  536MB               primary  boot
 
    (parted) rm 1
    (parted) quit
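
The same deletion can also be done non-interactively (a sketch, assuming that partition 1 is the only partition on each disk, as in the print output above):

    parted -s /dev/sdb rm 1
    parted -s /dev/sda rm 1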

Add the new RAID partition (this needs to be done for both sdb and sda):

    # Partition the first data disk on the live Linux system
    parted -a optimal /dev/sdb
    # Create a new partition table (gpt here; msdos would also work)
    mklabel gpt
    # Create the partition
    mkpart primary ext4 0% 100%
    # Mark the partition as a software RAID partition
    set 1 raid on
    # Verify that partition 1 is aligned
    align-check optimal 1
    # Show the result
    print

    # Partition the second data disk the same way
    parted -a optimal /dev/sda
    # Create a new partition table (gpt here; msdos would also work)
    mklabel gpt
    # Create the partition
    mkpart primary ext4 0% 100%
    # Mark the partition as a software RAID partition
    set 1 raid on
    # Verify that partition 1 is aligned
    align-check optimal 1
    # Show the result
    print
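
If you prefer to script this instead of typing into the interactive parted prompt, the following is a minimal non-interactive sketch of the same steps, assuming the two data disks really are /dev/sdb and /dev/sda as above:

    # Non-interactive sketch - destroys the existing partition table on both disks
    for d in /dev/sdb /dev/sda; do
        parted -s -a optimal "$d" mklabel gpt mkpart primary ext4 0% 100% set 1 raid on
        parted -s "$d" align-check optimal 1
    done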

Create the new RAID configuration (‘--level’ selects RAID 0/1/5, etc.):

    mdadm --create --verbose /dev/md4 --level=0 --assume-clean --raid-devices=2 /dev/sdb1 /dev/sda1
    cat /proc/mdstat
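
For reference, if redundancy matters more than capacity, the same command with --level=1 creates a mirror instead. This is only a sketch of the alternative; it is not what the lsblk output further down shows:

    # RAID1 (mirror) variant - sketch only, halves the usable capacity
    mdadm --create --verbose /dev/md4 --level=1 --raid-devices=2 /dev/sdb1 /dev/sda1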

If the array comes up under a different name (e.g. /dev/md127), stop it and reassemble it under the intended name:

    # Stop all running arrays (-S) found by scanning (-s), then reassemble
    mdadm -Ss
    mdadm --assemble --verbose /dev/md4 /dev/sdb1 /dev/sda1

Update mdadm configuration:

    # Not necessary, but may be useful:
    # mdadm --monitor --daemonise /dev/md4
 
    # Capture the array definition
    mdadm --detail --scan
    # Prints something like: 'ARRAY /dev/md4 UUID=7d45838b:7886c766:5802452c:653f8cca'
    # This line needs to be appended to the end of:
    /etc/mdadm/mdadm.conf
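    # One way to append it (assumes the new array is /dev/md4):
    mdadm --detail --scan | grep '/dev/md4' >> /etc/mdadm/mdadm.conf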
 
    # Update initramfs (ignore errors):
    update-initramfs -v -u
 
    # Create file system: 
    mkfs.ext4 -F /dev/md4
 
    # Mount fs:
    mount /dev/md4 /opt/
 
    # Make the mount permanent by adding this line to /etc/fstab:
    /dev/md4	/opt	ext4	defaults	0	1
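
Mounting by filesystem UUID instead of /dev/md4 is slightly more robust if the array ever gets renumbered; a sketch (the UUID below is a placeholder, use the one blkid prints for your array):

    blkid /dev/md4
    # /dev/md4: UUID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" TYPE="ext4"
    # /etc/fstab
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx	/opt	ext4	defaults	0	1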

The result could look something like this:

    lsblk
 
    NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
    sdb           8:16   0   3.7T  0 disk
    `-sdb1        8:17   0   3.7T  0 part
      `-md4       9:4    0   7.3T  0 raid0 /opt
    nvme1n1     259:0    0 419.2G  0 disk
    |-nvme1n1p3 259:3    0   5.4G  0 part  [SWAP]
    |-nvme1n1p1 259:1    0   511M  0 part
    `-nvme1n1p2 259:2    0 413.3G  0 part
      `-md2       9:2    0 413.3G  0 raid1 /
    sda           8:0    0   3.7T  0 disk
    `-sda1        8:1    0   3.7T  0 part
      `-md4       9:4    0   7.3T  0 raid0 /opt
    nvme0n1     259:4    0 419.2G  0 disk
    |-nvme0n1p3 259:7    0   5.4G  0 part  [SWAP]
    |-nvme0n1p1 259:5    0   511M  0 part  /boot/efi
    `-nvme0n1p2 259:6    0 413.3G  0 part
      `-md2       9:2    0 413.3G  0 raid1 /

Now you can reboot the server and verify your configuration.

https://doc.ubuntu-fr.org/raid_logiciel
https://www.psylogical.org/node/198
https://www.digitalocean.com/community/tutorials/how-to-create-raid-arrays-with-mdadm-on-ubuntu-16-04
https://github.com/etalab/etalab-support/tree/master/procedures
https://docs.ovh.com/fr/dedicated/raid-soft/

How To Setup VNC For Ubuntu 12

via How To Setup VNC For Ubuntu 12 | DigitalOcean.

Ubuntu XFCE VNC

Background

VNC stands for Virtual Network Computing; it allows you to connect to your server remotely and use your keyboard, mouse, and monitor to interact with it.

Step 1 – Install VNC server and XFCE 4 desktop.

To get started, we will install a VNC server on an Ubuntu 12.10 x64 Server droplet. Log in as root and install the packages:

apt-get -y install ubuntu-desktop tightvncserver xfce4 xfce4-goodies

 

Step 2 – Add a VNC user and set its password.

adduser vnc
passwd vnc

If you would like user vnc to be able to become root, you will have to add it to the sudoers file. Make sure you are logged in as root:

echo "vnc ALL=(ALL) ALL" >> /etc/sudoers

Set user vnc’s VNC Server password:

su - vnc
vncpasswd
exit

This step sets the VNC password for user ‘vnc’. It will be used later when you connect to your VNC server with a VNC client.

Now you can log in as user ‘vnc’ and obtain root by running ‘sudo su -‘ and entering your password.

Step 3 – Install VNC As A Service

Log in as root, create /etc/init.d/vncserver, and add the following lines:

#!/bin/bash
PATH="$PATH:/usr/bin/"
export USER="vnc"
DISPLAY="1"
DEPTH="16"
GEOMETRY="1024x768"
OPTIONS="-depth ${DEPTH} -geometry ${GEOMETRY} :${DISPLAY}"
. /lib/lsb/init-functions
 
case "$1" in
start)
log_action_begin_msg "Starting vncserver for user '${USER}' on localhost:${DISPLAY}"
su ${USER} -c "/usr/bin/vncserver ${OPTIONS}"
;;
 
stop)
log_action_begin_msg "Stoping vncserver for user '${USER}' on localhost:${DISPLAY}"
su ${USER} -c "/usr/bin/vncserver -kill :${DISPLAY}"
;;
 
restart)
$0 stop
$0 start
;;
esac
exit 0

Edit /home/vnc/.vnc/xstartup and replace with:

#!/bin/sh
xrdb $HOME/.Xresources
xsetroot -solid grey
startxfce4 &

Update file permissions and allow any user to start X Server:

chown -R vnc. /home/vnc/.vnc && chmod +x /home/vnc/.vnc/xstartup
sed -i 's/allowed_users.*/allowed_users=anybody/g' /etc/X11/Xwrapper.config

Make /etc/init.d/vncserver executable and start VNC server:

chmod +x /etc/init.d/vncserver && service vncserver start

Add your VNC server to automatically start on reboot:

update-rc.d vncserver defaults

 

Step 4 – Connect to your VNC server with TightVNC / Chicken of the VNC

yourserver:5901
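
VNC traffic is not encrypted by default. If you would rather not expose port 5901 directly, one common approach (not part of the original guide) is to tunnel the session over SSH and point the viewer at localhost:

# replace 'yourserver' with your server's address
ssh -L 5901:localhost:5901 vnc@yourserver
# then connect your VNC client to localhost:5901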

NFS cluster status and HighlyAvailableNFS

While working on an NFS cluster setup, I stumbled upon these two articles, which may be helpful for someone:

http://billharlan.com/pub/papers/NFS_for_clusters.html

Saturated network?

$ time dd if=/dev/zero of=testfile bs=4k count=8182
  8182+0 records in
  8182+0 records out
  real    0m8.829s
  user    0m0.000s
  sys     0m0.160s
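
As a rough sanity check: 8182 blocks × 4 KiB ≈ 32 MiB written in about 8.8 s, i.e. roughly 3.6 MiB/s (around 30 Mbit/s); comparing that figure with your link’s nominal bandwidth tells you whether the network, rather than the disk, is the limiting factor.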

 

First, exercise your disk with your own code or with a simple write operation; writing files should be enough to test network saturation. When profiling reads instead of writes, call umount and mount to flush caches, or the read will seem instantaneous:

$ cd /
$ umount /mnt/test
$ mount /mnt/test
$ cd /mnt/test
$ dd if=testfile of=/dev/null bs=4k count=8192

Check for failures on a client machine with:

  $ nfsstat -c
or
  $ nfsstat -o rpc

If more than 3% of calls are retransmitted, then there are problems with the network or NFS server. Look for NFS failures on a shared disk server with:

  $ nfsstat -s
or
  $ nfsstat -o rpc

It is not unreasonable to expect 0 badcalls. You should have very few “badcalls” out of the total number of “calls.”
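
As a rough worked example of that 3% rule (hypothetical numbers): 500 retransmissions out of 1,000,000 calls is 0.05% and harmless, while 40,000 out of 1,000,000 would be 4% and worth investigating.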

Lost packets

NFS must resend packets that are lost by a busy host. Look for permanently lost packets on the disk server with:

$ head -2 /proc/net/snmp | cut -d' ' -f17
  ReasmFails
  2

If you see this number increasing during NFS activity, then you are losing packets. You can reduce the number of lost packets on the server by increasing the buffer size for fragmented packets:

$ echo 524288 > /proc/sys/net/ipv4/ipfrag_low_thresh
$ echo 524288 > /proc/sys/net/ipv4/ipfrag_high_thresh

This is about double the default.
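
To make the change persistent across reboots, the same values can also be added to /etc/sysctl.conf (a sketch using the thresholds above):

# /etc/sysctl.conf - same values as above
net.ipv4.ipfrag_low_thresh = 524288
net.ipv4.ipfrag_high_thresh = 524288

$ sysctl -p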

Server threads

See if your server is receiving too many overlapping requests with:

$ grep th /proc/net/rpc/nfsd
  th 8 594 3733.140 83.850 96.660 0.000 73.510 30.560 16.330 2.380 0.000 2.150

The first number is the number of threads available for servicing requests, and the second number is the number of times that all threads have been needed. The remaining 10 numbers are a histogram showing how many seconds a certain fraction of the threads have been busy, starting with less than 10% of the threads and ending with more than 90% of the threads. If the last few numbers have accumulated a significant amount of time, then your server probably needs more threads.
Increase the number of threads used by the server to 16 by setting RPCNFSDCOUNT=16 in /etc/rc.d/init.d/nfs.
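
On Debian/Ubuntu systems (which this blog otherwise targets) the same knob usually lives in /etc/default/nfs-kernel-server rather than /etc/rc.d/init.d/nfs; a sketch:

# /etc/default/nfs-kernel-server
RPCNFSDCOUNT=16

$ service nfs-kernel-server restart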

Invisible or stale files

If separate clients are sharing information through NFS disks, then you have special problems. You may delete a file on one client node and cause a different client to get a stale file handle. Different clients may have cached inconsistent versions of the same file. A single client may even create a file or directory and be unable to see it immediately. If these problems sound familiar, then you may want to adjust NFS caching parameters and code multiple attempts in your applications.
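
If that is your situation, reducing or disabling attribute caching on the client mounts is one standard lever (a sketch; server:/export and the mount point are placeholders):

# disable attribute caching entirely - consistent but slower
$ mount -t nfs -o noac server:/export /mnt/test
# or keep caching but limit the attribute cache lifetime to 3 seconds
$ mount -t nfs -o actimeo=3 server:/export /mnt/test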

 

https://help.ubuntu.com/community/HighlyAvailableNFS

Introduction

 

In this tutorial we will set up a highly available server providing NFS services to clients. Should a server become unavailable, services provided by our cluster will continue to be available to users.

Our highly available system will resemble the following diagram: [image: drbd.jpg]