Archive for the ‘ Bash ’ Category

OVH: setting up the second disk array on a hybrid server

As OVH doesn’t provide a guide for setting up the second disk array, which is sold as an option, I will publish my solution.

Please be careful: only try these commands on test configurations, without any disks containing critical data. Some of them can destroy all of your data.

A solution within the OVH manager seems to be coming soon, but for now it isn’t working.

Tested on Ubuntu 14.04

Some commands to inspect your current configuration:

    fdisk -l
    lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
    df -h -x devtmpfs -x tmpfs

Delete sdb1/sda1 with parted:

    parted /dev/sdb
    (parted) print
    Model: ATA HGST HUS726040AL (scsi)
    Disk /dev/sdb: 4001GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
 
    Number  Start   End    Size   File system  Name     Flags
     1      1049kB  537MB  536MB               primary  boot
 
    (parted) rm 1
    (parted) quit
 
 
    parted /dev/sda
    (parted) print
    Model: ATA HGST HUS726040AL (scsi)
    Disk /dev/sda: 4001GB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
 
    Number  Start   End    Size   File system  Name     Flags
     1      1049kB  537MB  536MB               primary  boot
 
    (parted) rm 1
    (parted) quit

Add a new RAID partition (needs to be done on both sdb and sda):

    # Partition the first disk in the live Linux system
    parted -a optimal /dev/sdb
    # Create a new GPT partition table (use msdos for MBR)
    mklabel gpt
    # Create a partition spanning the whole disk
    mkpart primary ext4 0% 100%
    # Mark the partition as a software RAID partition
    set 1 raid on
    # Verify it is aligned
    align-check optimal 1
    # Show results
    print
 
    # Same procedure for the second disk
    parted -a optimal /dev/sda
    # Create a new GPT partition table (use msdos for MBR)
    mklabel gpt
    # Create a partition spanning the whole disk
    mkpart primary ext4 0% 100%
    # Mark the partition as a software RAID partition
    set 1 raid on
    # Verify it is aligned
    align-check optimal 1
    # Show results
    print

Create the new RAID array (‘--level’ selects RAID0/1/5 …):

    mdadm --create --verbose /dev/md4 --level=0 --assume-clean --raid-devices=2 /dev/sdb1 /dev/sda1
    cat /proc/mdstat

In case the array gets renamed (e.g. it shows up as /dev/md127), stop and re-assemble it:

    # Delete all and rescan
    mdadm -Ss
    mdadm --assemble --verbose /dev/md4 /dev/sdb1 /dev/sda1

Update mdadm configuration:

    # Not necessary, but may be useful:
    # mdadm --monitor --daemonise /dev/md4
 
    # Capture the output of:
    mdadm --detail --scan
    # Something like 'ARRAY /dev/md4 UUID=7d45838b:7886c766:5802452c:653f8cca'
    # needs to be appended to the end of:
    /etc/mdadm/mdadm.conf
 
    # Update initramfs (ignore errors):
    update-initramfs -v -u
 
    # Create file system: 
    mkfs.ext4 -F /dev/md4
 
    # Mount fs:
    mount /dev/md4 /opt/
 
    # Update fstab:
    /etc/fstab
    /dev/md4 	/opt	ext4	defaults 	0	1
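
The two edits above (mdadm.conf and fstab) can also be scripted instead of pasted by hand; a minimal sketch, assuming the array is /dev/md4 and the mount point is /opt as in this guide:

```shell
# Append the scanned array definition to mdadm.conf so the
# array is assembled under the same name on every boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# Append the mount to fstab, then let mount parse it:
# any error from 'mount -a' means the fstab line is wrong
echo '/dev/md4  /opt  ext4  defaults  0  1' >> /etc/fstab
mount -a
```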

The result could look something like this:

    lsblk
 
    NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
    sdb           8:16   0   3.7T  0 disk
    `-sdb1        8:17   0   3.7T  0 part
      `-md4       9:4    0   7.3T  0 raid0 /opt
    nvme1n1     259:0    0 419.2G  0 disk
    |-nvme1n1p3 259:3    0   5.4G  0 part  [SWAP]
    |-nvme1n1p1 259:1    0   511M  0 part
    `-nvme1n1p2 259:2    0 413.3G  0 part
      `-md2       9:2    0 413.3G  0 raid1 /
    sda           8:0    0   3.7T  0 disk
    `-sda1        8:1    0   3.7T  0 part
      `-md4       9:4    0   7.3T  0 raid0 /opt
    nvme0n1     259:4    0 419.2G  0 disk
    |-nvme0n1p3 259:7    0   5.4G  0 part  [SWAP]
    |-nvme0n1p1 259:5    0   511M  0 part  /boot/efi
    `-nvme0n1p2 259:6    0 413.3G  0 part
      `-md2       9:2    0 413.3G  0 raid1 /

Now you can reboot the server and verify your configuration.
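
After the reboot, a quick read-only check that everything came back (device names as used above):

```shell
cat /proc/mdstat   # md4 should be listed as active raid0 over sdb1/sda1
findmnt /opt       # should show /dev/md4 mounted as ext4
df -h /opt         # free space of the new array
```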

https://doc.ubuntu-fr.org/raid_logiciel
https://www.psylogical.org/node/198
https://www.digitalocean.com/community/tutorials/how-to-create-raid-arrays-with-mdadm-on-ubuntu-16-04
https://github.com/etalab/etalab-support/tree/master/procedures
https://docs.ovh.com/fr/dedicated/raid-soft/

Preseed apt-get for unattended installs

Installation of some packages requires user input, which breaks the entire concept of “unattended” installs. Here’s a quick fix to get around that.

First, on a setup computer, install the following:

sudo apt-get install debconf-utils

Now, install whatever packages you wish to later install in an unattended mode. Answer the questions for installation appropriately when prompted. Next we will export those answers into a “seed” file that we can use when installing the package on a new machine. For instance, I’ve installed several ldap authentication packages, so I might want to grab all of the settings those packages ask for.

sudo debconf-get-selections | grep ldap > ldap.seed

If you remember from yesterday, we created an archive that included a setup script and several config files. Place the ldap.seed file inside that archive along with the other scripts, and just before doing the apt-get install add the following line to the add2network.sh file:

debconf-set-selections ./ldap.seed

And there you have it – your apt-get won’t ask for details anymore.
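
The seed file itself is plain text, one answer per line: package, question, answer type, value. A minimal sketch with hypothetical slapd entries (the real lines come from the debconf-get-selections output on your setup machine):

```shell
# Write a hypothetical seed file; real entries come from
# 'debconf-get-selections | grep ldap' on the setup machine
cat > ldap.seed <<'EOF'
slapd slapd/domain string example.com
slapd shared/organization string Example Org
EOF

# On the target machine, load the answers before installing;
# the '|| true' keeps the sketch harmless where debconf is absent
debconf-set-selections ./ldap.seed 2>/dev/null || true

grep -c '^slapd' ldap.seed   # -> 2
```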

Source: ossramblings

s3ql – a full-featured file system for online data storage

I am really impressed by my first look at s3ql. A complete, well-documented tool that just works for mounting AWS S3 and other cloud storage solutions on a dedicated server:

S3QL is a file system that stores all its data online using storage services like Google Storage, Amazon S3 or OpenStack. S3QL effectively provides a hard disk of dynamic, infinite capacity that can be accessed from any computer with internet access running Linux, FreeBSD or OS-X.

S3QL is a standard conforming, full featured UNIX file system that is conceptually indistinguishable from any local file system. Furthermore, S3QL has additional features like compression, encryption, data de-duplication, immutable trees and snapshotting which make it especially suitable for online backup and archival.

S3QL is designed to favor simplicity and elegance over performance and feature-creep. Care has been taken to make the source code as readable and serviceable as possible. Solid error detection and error handling have been included from the very first line, and S3QL comes with extensive automated test cases for all its components.

Features

  • Transparency. Conceptually, S3QL is indistinguishable from a local file system. For example, it supports hardlinks, symlinks, ACLs and standard unix permissions, extended attributes and file sizes up to 2 TB.
  • Dynamic Size. The size of an S3QL file system grows and shrinks dynamically as required.
  • Compression. Before storage, all data may be compressed with the LZMA, bzip2 or deflate (gzip) algorithm.
  • Encryption. After compression (but before upload), all data can be AES encrypted with a 256-bit key. An additional SHA256 HMAC checksum is used to protect the data against manipulation.
  • Data De-duplication. If several files have identical contents, the redundant data will be stored only once. This works across all files stored in the file system, and also if only some parts of the files are identical while other parts differ.
  • Immutable Trees. Directory trees can be made immutable, so that their contents can no longer be changed in any way whatsoever. This can be used to ensure that backups can not be modified after they have been made.
  • Copy-on-Write/Snapshotting. S3QL can replicate entire directory trees without using any additional storage space. Only if one of the copies is modified, the part of the data that has been modified will take up additional storage space. This can be used to create intelligent snapshots that preserve the state of a directory at different points in time using a minimum amount of space.
  • High Performance independent of network latency. All operations that do not write or read file contents (like creating directories or moving, renaming, and changing permissions of files and directories) are very fast because they are carried out without any network transactions. S3QL achieves this by saving the entire file and directory structure in a database. This database is locally cached and the remote copy updated asynchronously.
  • Support for low bandwidth connections. S3QL splits file contents into smaller blocks and caches blocks locally. This minimizes both the number of network transactions required for reading and writing data, and the amount of data that has to be transferred when only parts of a file are read or written.
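
Day-to-day usage is close to any other file system; a sketch of creating and mounting an S3-backed file system with the s3ql tools (bucket name and mount point are placeholders, and credentials live in s3ql’s authinfo file):

```shell
# Create the file system inside an existing S3 bucket (one-time)
mkfs.s3ql s3://my-bucket

# Mount it; compression, encryption and caching are transparent
mount.s3ql s3://my-bucket /mnt/s3ql

# Use it like a local disk
cp -a /var/backups /mnt/s3ql/

# s3qlcp makes a copy-on-write snapshot, s3qllock makes it immutable
s3qlcp   /mnt/s3ql/backups /mnt/s3ql/backups-snapshot
s3qllock /mnt/s3ql/backups-snapshot

# Always unmount cleanly so the local cache is flushed
umount.s3ql /mnt/s3ql
```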

 

http://code.google.com/p/s3ql/