In this guide we will be installing Debian 9 (aka stretch) on a physical server with 4 disks. This machine will serve as a storage/NAS system. We will create a software RAID 10 setup, with LVM and LUKS full-disk encryption. Our goals:

* Install Debian 9 with RAID10/LVM/LUKS.
* Secure SSH.
* Enable Firewall (UFW).
* Setup bonding with the two network cards.
* Setup remote system unlock with *dropbear* and *initramfs*.
* Setup disk-monitoring with *smartmontools* and *mdadm*.
* Setup *kexec* for faster reboots.

## Computer specs

* Type: HP ProLiant MicroServer
* CPU: AMD Turion(tm) II Neo N40L Dual-Core Processor
* RAM: 2GB
* Disks: 4x3TB SATA
* Network:
  * Broadcom Limited NetXtreme BCM5723 Gigabit Ethernet PCIe (on board)
  * Intel Corporation 82574L Gigabit Network Connection (extra)

## Assumptions

* Server IP: 192.168.1.10
* Netmask: 255.255.255.0
* Gateway IP: 192.168.1.1
* DNS IP: 192.168.1.1
* Hostname: storage.example.com

## Install Debian stretch

### Basic Settings

Screenshots for each step would probably make this clearer, but this was an installation on a physical server, and taking a photo at every step would have tested my laziness :). Just follow the instructions and you will be fine.

* Choose: **Install**
* Language: **English**
* Country: **other**
* Europe: **Cyprus**
* Country to base default locale settings: **United States**
* Keymap to use: **American English**
* Primary network interface: **enp3s0: Broadcom Limited NetXtreme BCM5723 Gigabit Ethernet PCIe**
* Let it get an IP from DHCP
* Hostname: **storage**
* Domain name: **example.com**
* Root password: **SomethingBigAndUnpredictable**
* Re-enter password to verify: **SomethingBigAndUnpredictable**
* Full name: **Sysadmin**
* Username: **admin**
* Choose a password for the new user: **AlsoSomethingBigAndUnpredictable**
* Re-enter password to verify: **AlsoSomethingBigAndUnpredictable**
* Select your time zone: **Asia/Nicosia**
* Partitioning method: **Manual**

Feel free to adjust the above according to your own preferences.

### Partitioning

There are 4 disks of 3TB each (3.0 TB SATA):

* SCSI1 (0,0,0) (sda)
* SCSI2 (0,0,0) (sdb)
* SCSI3 (0,0,0) (sdc)
* SCSI4 (0,0,0) (sdd)

The installer labels them SCSI, but they are in fact SATA.

#### Partition the devices

First create a RAID partition for */boot*:

* Select the free space of sda and ‘Enter’
* Create a new partition
* New partition size: **512 MB**
* Location of new partition: **Beginning**
* Use as: **physical volume for RAID**
* Done setting up the partition

Then create the RAID partition to be used by the encrypted volume:

* Select the free space of sda and ‘Enter’
* Create a new partition
* New partition size: **3.0 TB**
* Location of new partition: **Beginning**
* Use as: **physical volume for RAID**
* Done setting up the partition

Repeat the above steps for sdb, sdc and sdd.
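If you would rather not repeat the installer dialogs by hand, the same layout can be cloned from a shell (e.g. the installer's console). A hedged sketch, assuming sda is already partitioned as described and your *sfdisk* version understands the disks' partition table format:

```
# Replicate the partition layout of sda onto sdb, sdc and sdd.
# WARNING: double-check the device names, this overwrites the
# partition tables of the target disks!
for DISK in sdb sdc sdd ; do
    sfdisk -d /dev/sda | sfdisk /dev/$DISK
done
```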

#### Setup Software RAID 10

First select ‘Configure software RAID’ and follow these steps:

* Write the changes to the storage devices and configure RAID? **Yes**

Then we create the software RAID (MD) devices. First we create device *md0* for /boot:

* Create MD Device
* RAID10
* Number of active devices in the RAID10 array: **4**
* Number of spare devices in the RAID10 array: **0**
* Active devices for the RAID10 array (use ‘Space bar’ to select):
  * **/dev/sda2**
  * **/dev/sdb2**
  * **/dev/sdc2**
  * **/dev/sdd2**
* Press ‘Continue’ when done.

Then we create the software RAID device to be used for the encrypted volume (*md1*):

* Create MD Device
* RAID10
* Number of active devices in the RAID10 array: **4**
* Number of spare devices in the RAID10 array: **0**
* Active devices for the RAID10 array (use the ‘Space bar’ to select):
  * **/dev/sda3**
  * **/dev/sdb3**
  * **/dev/sdc3**
  * **/dev/sdd3**
* Press ‘Continue’ when done.

* Press ‘Finish’ when done.
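For reference, the arrays the installer builds here could also be created by hand with *mdadm*. A rough sketch of the equivalent commands, assuming the partition layout above:

```
# RAID10 array for /boot (4 active devices, no spares)
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
# RAID10 array that will host the encrypted volume
mdadm --create /dev/md1 --level=10 --raid-devices=4 \
    /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
# Watch the initial sync progress
cat /proc/mdstat
```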

#### Create the /boot volume

When finished you will see a ‘RAID10 device #0 1.0 GB Software RAID device’ entry (4x512 MB in RAID10 gives 1.0 GB):

* Select: #1 1.0GB
* Use as: **Ext4 journaling file system**
* Mount point: **/boot**
* Done setting up the partition

#### Setup the encrypted volume

We will be using the software RAID */dev/md1* device for the encrypted volume.

Now select ‘Configure encrypted volumes’ and follow these steps:

* Write the changes to disk and configure encrypted volumes? **Yes**

* Create encrypted volumes
* Select: */dev/md1*
* Erase data: **yes** (this will take a long time)
* Done setting up the partition

* Write the changes to disk and configure encrypted volumes? **Yes**
* Finish
* Encryption passphrase: **MyVeryLongEncryptionPassphrase**
* Re-enter the passphrase to verify: **MyVeryLongEncryptionPassphrase**
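Under the hood the installer drives *cryptsetup*. Done manually it would look roughly like this (a sketch only; the installer handles all of this for you):

```
# Format /dev/md1 as a LUKS container (prompts for the passphrase)
cryptsetup luksFormat /dev/md1
# Open the container; it appears as /dev/mapper/md1_crypt
cryptsetup open /dev/md1 md1_crypt
```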

#### Setup LVM

Next we select the ‘Configure the Logical Volume Manager’ option and follow these steps:

* Write the changes to disks and configure LVM? **Yes**

* Create volume group
* Volume group name: **VG00**
* Devices for the new volume group:
  * /dev/mapper/md1_crypt

Then we create the Logical Volumes (LV). First let’s create a SWAP volume:

* Create logical volume
* Volume group: **VG00**
* Logical volume name: **SWAP**
* Logical volume size: **2048MB** (2G is more than enough for this system)

Lastly we create the system (ROOT) volume. On an enterprise installation we might want separate volumes for /usr, /home, /var, etc., but for a home installation a single volume is fine.

* Create logical volume
* Volume group: **VG00**
* Logical volume name: **ROOT**
* Logical volume size: **5996818MB** (All available space)

Press ‘Finish’ when done.
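Again for reference, the equivalent LVM commands would look something like this (a sketch, assuming the device and volume names above):

```
# Turn the decrypted device into an LVM physical volume
pvcreate /dev/mapper/md1_crypt
# Create the volume group on top of it
vgcreate VG00 /dev/mapper/md1_crypt
# Carve out the swap and root logical volumes
lvcreate -L 2G -n SWAP VG00
lvcreate -l 100%FREE -n ROOT VG00
```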

### Start the installation

After all the steps are completed these Logical Volumes will be present on the system:

* LVM VG VG00, LV ROOT 0 6.0 TB
* LVM VG VG00, LV SWAP 0 2.0 GB

#### Create the ROOT filesystem

Under the ‘LVM VG VG00, LV ROOT 0 6.0 TB’ line select the ‘#1 6.0TB’ option:

* Use as: **Ext4 journaling file system**
* Mount point: **/**
* Done setting up this partition

#### Create the SWAP space

Under the ‘LVM VG VG00, LV SWAP 0 2.0 GB’ line select the ‘#1 2.0GB’ option:

* Use as: **swap area**
* Done setting up this partition

Now we are ready to write the changes and start the installation. Press the ‘Finish partitioning and write changes to disk’ option to continue:

* Write the changes to disks? **Yes**

Wait for the base install to finish. Then select a country close to you. There are no Debian mirrors in Cyprus, so I use the United Kingdom:

* Debian archive mirror country: **United Kingdom**
* Debian archive mirror: **ftp.uk.debian.org**
* HTTP proxy: (none)

Wait for the APT configuration to finish.

* Participate in the package usage survey: **no**

* Choose software to install:
  * SSH server
  * standard system utilities

Wait while the software is installing.

* Install the GRUB boot loader to the master boot record.
* Device for boot loader installation:
  * **/dev/sda**

Wait for the installation to finish and reboot. Remember to remove the USB installation media during the reboot cycle.

## Post install steps

During start-up you will see the ‘Please unlock md1_crypt’ prompt. Type your LUKS passphrase to unlock the disk and continue.

### Update and Upgrade

Login as *root*:

```
# apt update && apt -y dist-upgrade
```

### Install essential packages

```
# apt -y install vim htop multitail ntp byobu ufw unattended-upgrades downtimed
```

### Secure SSH

You need to generate an SSH key pair on your PC, if you don’t have one (you should!):

```
$ ssh-keygen -b 4096
```

Copy the public key:

```
$ cat ~/.ssh/id_rsa.pub
```

Paste the public key at the end of the */root/.ssh/authorized_keys* file on your server and try to login from your PC:

```
$ ssh root@192.168.1.10
```

Finally, make some adjustments to the *SSH* daemon config (*/etc/ssh/sshd_config*). Change these values:

```
Port 2233
PasswordAuthentication no
```

Restart *SSH*:

```
# systemctl restart ssh.service
```
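Before closing your current session, check the configuration for syntax errors with `sshd -t`, then test a login on the new port from your PC so you do not lock yourself out:

```
$ ssh -p 2233 root@192.168.1.10
```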

### Enable the UFW firewall

We are using port 2233 for *SSH* so we need to allow that and enable the firewall:

```
# ufw allow 2233/tcp
# ufw enable
```
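If you prefer, `ufw limit 2233/tcp` can be used instead of `allow`; it rate-limits repeated connection attempts, which slows down brute-force attacks. Verify the ruleset with:

```
# ufw status verbose
```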

### Setup bonding

Since we have two ethernet cards, we can take advantage of the Linux bonding feature and join them into one logical interface. We will be using the *Adaptive load balancing* mode (balance-alb), which load-balances both transmitted and received IPv4 traffic and requires no configuration on the switch side.

First we need to install *ifenslave*:

```
# apt -y install ifenslave
```

Set up this in */etc/network/interfaces*:

```
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# enp2s0 is manually configured, and slave to the "bond0" bonded NIC
auto enp2s0
iface enp2s0 inet manual
    bond-master bond0

# enp3s0 is also manually configured, thus creating a 2-link bond.
auto enp3s0
iface enp3s0 inet manual
    bond-master bond0

# bond0 is the bonded NIC and can be used like any other normal NIC.
# bond0 is configured using static network information.
auto bond0
iface bond0 inet static
    address 192.168.1.10
    gateway 192.168.1.1
    netmask 255.255.255.0
    # bond0 uses adaptive load balancing (mode 6, balance-alb)
    bond-mode 6
    bond-miimon 100
    bond-slaves enp2s0 enp3s0
```

An `ifup bond0` should bring the bonded interface up. Or you can just `reboot`.
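To confirm that the bond came up in the right mode and that both slaves are active, inspect the bonding state exposed by the kernel:

```
# cat /proc/net/bonding/bond0
```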

### Setup remote-unlock with dropbear

The server will be a headless system, located in a difficult-to-access location, so we need a way to unlock it when a power failure occurs. The most convenient way to do this is to use a [mandos server](https://wiki.recompile.se/wiki/Mandos), but convenience comes at a [cost](https://www.recompile.se/mandos/man/intro.8mandos). A safer and simpler way is to run dropbear during boot (from the initrd). The weak point of this solution is that after a power failure the server stays offline until the sysadmin manually unlocks it.

First we install *dropbear* for *initrd*:

```
# apt -y install dropbear-initramfs
```

Then we set a custom SSH port for *dropbear*. This should be different from the custom SSH port we used earlier. Set the dropbear port to 2244 in */etc/dropbear-initramfs/config*:

```
DROPBEAR_OPTIONS="-p 2244"
```

Add the static IP configuration to the initramfs-tools configuration (*/etc/initramfs-tools/initramfs.conf*):

```
IP=192.168.1.10::192.168.1.1:255.255.255.0:storage:enp3s0:off
```

Copy the *authorized_keys* file to */etc/dropbear-initramfs*:

```
# cp /root/.ssh/authorized_keys /etc/dropbear-initramfs/
```

Regenerate the initrd file:

```
# update-initramfs -u
```

Now reboot and SSH in to test it:

```
$ ssh -p 2244 root@192.168.1.10
```

If your pubkeys are in place you will get a BusyBox shell. Enter the `cryptroot-unlock` command, supply your unlock passphrase, and the system will boot into the encrypted system.
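Note that the dropbear host key in the initramfs is different from the host key of the fully booted system, so your SSH client will complain about a changed host key when switching between the two. One workaround (a client-side suggestion, adjust to taste) is to keep a separate known_hosts file for the initramfs:

```
$ ssh -o UserKnownHostsFile=~/.ssh/known_hosts.initramfs -p 2244 root@192.168.1.10
```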

### Setup a local MTA for notifications

We will use our main mail server as a smarthost to relay outgoing mail.

Install the *postfix* MTA and the *mail* utility:

```
# apt -y install postfix mailutils
```

Answer these questions:

* General type of mail configuration: **Internet with smarthost**
* System mail name: **storage.example.com**
* SMTP relay host (blank for none): **smtp.example.com**

Test it:

```
# echo 'Testing #1' | mail -s 'Test #1' user@example.com
```

If you get a mail in your mailbox then everything is set. If not, extra configuration may be needed on the smarthost. Contact the sysadmin of the smarthost, or check the logs if you have access to it.
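On the storage box itself, the postfix log usually shows whether the smarthost accepted or rejected the message:

```
# tail -f /var/log/mail.log
```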

### Setup pro-active disk monitoring

Install *smartmontools*:

```
# apt -y install smartmontools
```

Enable S.M.A.R.T., automatic offline testing, attribute autosave, and scheduled short and long self-tests on all 4 devices. The `-s` expression below schedules a short self-test every day at 02:00 and a long self-test every Saturday at 03:00. Add these lines to */etc/smartd.conf*:

```
/dev/sda -a -d sat -o on -S on -s (S/../.././02|L/../../6/03) -m user@example.com -M exec /usr/share/smartmontools/smartd-runner
/dev/sdb -a -d sat -o on -S on -s (S/../.././02|L/../../6/03) -m user@example.com -M exec /usr/share/smartmontools/smartd-runner
/dev/sdc -a -d sat -o on -S on -s (S/../.././02|L/../../6/03) -m user@example.com -M exec /usr/share/smartmontools/smartd-runner
/dev/sdd -a -d sat -o on -S on -s (S/../.././02|L/../../6/03) -m user@example.com -M exec /usr/share/smartmontools/smartd-runner
```

Restart *smartmontools*:

```
# systemctl restart smartmontools.service
```
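You can also query and test the disks by hand with *smartctl*, for example:

```
# smartctl -a /dev/sda        # full SMART report for the first disk
# smartctl -t short /dev/sda  # start a short self-test right away
```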

### Setup Software RAID10 monitoring

We also need to set up monitoring for the software RAID. Add your email address to the */etc/mdadm/mdadm.conf* file:

```
MAILADDR user@example.com
```

Restart the *mdmonitor* service:

```
# systemctl restart mdmonitor.service
```
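To verify that the alerting actually works end-to-end, mdadm can send a test message for each array:

```
# mdadm --monitor --scan --test --oneshot
```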

### Setup kexec for faster reboots

*Kexec* is a Linux kernel mechanism that can load a fresh kernel from the running system. The system loads the new kernel and appears “rebooted”, but skips the BIOS/UEFI initialization, resulting in faster reboots.

Install *kexec-tools*:

```
# apt -y install kexec-tools
```

The ‘Should kexec-tools handle reboots (sysvinit only)?’ question is related only to *sysvinit* systems. Since we are using *systemd*, it has no effect in our case.

Now if you want to reboot, instead of running `reboot` you can run `systemctl kexec`. The latter reboots the system without going through BIOS/UEFI, POST, etc., so your system downtime is minimized.
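Depending on the systemd version, you may have to load the new kernel into kexec yourself before calling `systemctl kexec`. A hedged sketch, assuming the standard Debian /vmlinuz and /initrd.img symlinks:

```
# kexec -l /vmlinuz --initrd=/initrd.img --reuse-cmdline
# systemctl kexec
```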

And we are done! Store your server in a protected location, add a UPS for power backup and you are ready.

References
----------
* https://wiki.debian.org/Bonding
* https://help.ubuntu.com/community/UbuntuBonding
* https://www.theo-andreou.org/?p=1579
* https://wiki.recompile.se/wiki/Mandos
* http://forums.ayksolutions.com/forum/documentation/knowledgebase/general-server-questions/641-proactively-monitoring-hard-drive-health-using-smartd

This is a guide about adding a new disk to an existing [LVM](http://en.wikipedia.org/wiki/Logical_Volume_Manager_%28Linux%29 "Logical Volume Manager") [VG](http://en.wikipedia.org/wiki/Volume_group "Volume Group").

Check current setup
-------------------

1. First let’s check and document the current setup.

* Checking the ***/proc*** filesystem:

      # cat /proc/partitions 
      major minor  #blocks  name
         8        0    7880544 sda
         8        1     248832 sda1
         8        2          1 sda2
         8        5    7628800 sda5
       252        0    6574080 dm-0
       252        1    1036288 dm-1

* Using *`lsblk`*:

      # lsblk 
      NAME                   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
      sda                      8:0    0  7,5G  0 disk 
      ├─sda1                   8:1    0  243M  0 part /boot
      ├─sda2                   8:2    0    1K  0 part 
      └─sda5                   8:5    0  7,3G  0 part 
        ├─ubuntu--vg-root   252:0    0  6,3G  0 lvm  /
        └─ubuntu--vg-swap_1 252:1    0 1012M  0 lvm  [SWAP]
      

2. Check and document the LVM layout.

* Volume Group info:

      # vgs
        VG         #PV #LV #SN Attr   VSize VFree 
        ubuntu-vg    1   2   0 wz--n- 7,27g 16,00m
      

* Physical Volume info:

      # pvs
        PV         VG         Fmt  Attr PSize PFree 
        /dev/sda5  ubuntu-vg  lvm2 a--  7,27g 16,00m
      

* Logical Volume info:

      # lvs
        LV     VG         Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
        root   ubuntu-vg  -wi-ao----    6,27g
        swap_1 ubuntu-vg  -wi-ao---- 1012,00m
      

Add a new physical or virtual disk
----------------------------------

Now add the new disk on your server. On VMs it is possible to add a new disk without powering off. On physical servers this can be possible too if the server comes with hot-swap functionality. Check your server specs first!

1. Check if the new disk is detected.

* You can use this command if you want your system to detect the new disk without rebooting:

      # for SCSI_HOST in /sys/class/scsi_host/* ; do echo "- - -" > $SCSI_HOST/scan ; done
      

The above command simply loops through the SCSI hosts under the ***/sys/class/scsi_host*** directory and sends the “**- - -**” string to them. This forces the SCSI hosts to detect the new disk that has been attached.

* Using the ***/proc*** filesystem:

      # cat /proc/partitions
      major minor  #blocks  name
         8        0    7880544 sda
         8        1     248832 sda1
         8        2          1 sda2
         8        5    7628800 sda5
         8       16   31522680 sdb
       252        0    6574080 dm-0
       252        1    1036288 dm-1

The size of the new disk (sdb) is 30GB:

      # echo '31522680/1024/1024' | bc -l
      30.06237030029296875000
      

* Using *`lsblk`*:

      # lsblk 
      NAME                   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
      sda                      8:0    0  7,5G  0 disk 
      ├─sda1                   8:1    0  243M  0 part /boot
      ├─sda2                   8:2    0    1K  0 part 
      └─sda5                   8:5    0  7,3G  0 part 
        ├─ubuntu--vg-root   252:0    0  6,3G  0 lvm  /
        └─ubuntu--vg-swap_1 252:1    0 1012M  0 lvm  [SWAP]
      sdb                      8:16   0 30,1G  0 disk
      

Add the new disk to LVM Volume Group
------------------------------------

1. Create a new partition on the new disk:

   # fdisk /dev/sdb

   Welcome to fdisk (util-linux 2.25.2).
   Changes will remain in memory only, until you decide to write them.
   Be careful before using the write command.

   Device does not contain a recognized partition table.
   Created a new DOS disklabel with disk identifier 0xaea3ab78.

   Command (m for help): n
   Partition type
      p   primary (0 primary, 0 extended, 4 free)
      e   extended (container for logical partitions)
   Select (default p): 

   Using default response p.
   Partition number (1-4, default 1): 
   First sector (2048-63045359, default 2048): 
   Last sector, +sectors or +size{K,M,G,T,P} (2048-63045359, default 63045359): 

   Created a new partition 1 of type 'Linux' and of size 30,1 GiB.

   Command (m for help): t
   Selected partition 1
   Hex code (type L to list all codes): 8e
   Changed type of partition 'Linux' to 'Linux LVM'.

   Command (m for help): w
   The partition table has been altered.
   Calling ioctl() to re-read partition table.
   Syncing disks.

2. Verify the creation of a new partition:

   # fdisk -l /dev/sdb
   Disk /dev/sdb: 30,1 GiB, 32279224320 bytes, 63045360 sectors
   Units: sectors of 1 * 512 = 512 bytes
   Sector size (logical/physical): 512 bytes / 512 bytes
   I/O size (minimum/optimal): 512 bytes / 512 bytes
   Disklabel type: dos
   Disk identifier: 0xaea3ab78

   Device     Boot Start      End  Sectors  Size Id Type
   /dev/sdb1        2048 63045359 63043312 30,1G 8e Linux LVM

Add the new partition to the Volume Group
-----------------------------------------

1. Extend the Volume Group by adding a new disk:

   # vgextend ubuntu-vg /dev/sdb1
     Physical volume "/dev/sdb1" successfully created
     Volume group "ubuntu-vg" successfully extended
   

2. Check the current free space of the Volume Group:

   # vgs
     VG        #PV #LV #SN Attr   VSize  VFree 
     ubuntu-vg   2   2   0 wz--n- 37,33g 30,07g
   

3. Verify the new Physical Volume:

   # pvs
     PV         VG        Fmt  Attr PSize  PFree 
     /dev/sda5  ubuntu-vg lvm2 a--   7,27g 16,00m
     /dev/sdb1  ubuntu-vg lvm2 a--  30,06g 30,06g
   

4. Check the state of the Logical Volumes:

   # lvs
     LV     VG        Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
     root   ubuntu-vg -wi-ao---- 6,27g     
     swap_1 ubuntu-vg -wi-ao---- 1012,00m
   

Nothing changed yet, of course.

Resize the logical volume
-------------------------

1. Use the **`lvresize`** command to resize the root volume:

   # lvresize -L 30,07g /dev/ubuntu-vg/root 
     Rounding size to boundary between physical extents: 30,07 GiB
     Size of logical volume ubuntu-vg/root changed from 6,27 GiB (1605 extents) to 30,07 GiB (7698 extents).
     Logical volume root successfully resized
   

2. Verify the volume resize:

   # lvs
     LV     VG        Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
     root   ubuntu-vg -wi-ao----   30,07g
     swap_1 ubuntu-vg -wi-ao---- 1012,00m
   

The root volume is now at 30,07GB. Good.
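As a side note, recent LVM versions can grow the volume and the filesystem in one step via the `-r`/`--resizefs` flag (a sketch, assuming an ext4 root as in this setup):

    # lvextend -r -l +100%FREE /dev/ubuntu-vg/root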

Resize the filesystem
---------------------

1. Check the current filesystem size:

   # df -hT
   Filesystem                   Type      Size  Used Avail Use% Mounted on
   udev                         devtmpfs  485M     0  485M   0% /dev
   tmpfs                        tmpfs     100M  5,1M   95M   6% /run
   /dev/mapper/ubuntu--vg-root  ext4      6,1G  2,4G  3,4G  41% /
   tmpfs                        tmpfs     496M     0  496M   0% /dev/shm
   tmpfs                        tmpfs     5,0M  4,0K  5,0M   1% /run/lock
   tmpfs                        tmpfs     496M     0  496M   0% /sys/fs/cgroup
   /dev/sda1                    ext2      236M   40M  184M  18% /boot
   tmpfs                        tmpfs     100M   16K  100M   1% /run/user/1000
   

Still using the old size, as expected.

2. Resize the root filesystem:

   # resize2fs /dev/mapper/ubuntu--vg-root
   resize2fs 1.42.12 (29-Aug-2014)
   Filesystem at /dev/mapper/ubuntu--vg-root is mounted on /; on-line resizing required
   old_desc_blocks = 1, new_desc_blocks = 2
   The filesystem on /dev/mapper/ubuntu--vg-root is now 7882752 (4k) blocks long.
   

3. Verify the new size of the root filesystem:

   # df -hT
   Filesystem                   Type      Size  Used Avail Use% Mounted on
   udev                         devtmpfs  485M     0  485M   0% /dev
   tmpfs                        tmpfs     100M  5,1M   95M   6% /run
   /dev/mapper/ubuntu--vg-root  ext4       30G  2,4G   26G   9% /
   tmpfs                        tmpfs     496M     0  496M   0% /dev/shm
   tmpfs                        tmpfs     5,0M  4,0K  5,0M   1% /run/lock
   tmpfs                        tmpfs     496M     0  496M   0% /sys/fs/cgroup
   /dev/sda1                    ext2      236M   40M  184M  18% /boot
   tmpfs                        tmpfs     100M   16K  100M   1% /run/user/1000
   

So now we have a logical root volume that spans multiple physical disks. Notice, however, that this is not a very robust setup: the loss of any one of the physical volumes can bring down the whole system along with your data. Make sure you have a solid, tested backup procedure in place, and document the restore procedure in detail in your disaster recovery practices.

References
----------
* [1] http://tldp.org/HOWTO/LVM-HOWTO/index.html
* [2] http://wingloon.com/2013/05/07/how-to-detect-a-new-hard-disk-without-rebooting-vmware-linux-guest/

In this guide we examine how to increase the disk size of a Linux VM, when the need arises.

> ***Note***
> *Make sure you backup everything you have on your system, before trying this guide. This is an advanced HOWTO and it can break your system, irrecoverably, if you make a critical mistake!*

This guide assumes that you are using the Linux Logical Volume Manager (LVM) to manage your storage. If you are new to the concept of LVM you can study the excellent [LVM HOWTO](http://tldp.org/HOWTO/LVM-HOWTO/ "LVM HOWTO") from [The Linux Documentation Project](http://tldp.org/ "TLDP") website.

Even though it may be possible to resize a Linux system without using LVM, an LVM setup is highly recommended. No matter if you are working on a physical or virtual machine, LVM is the preferred method of storage management in Linux, since it simplifies tasks related to storage, including volume resizing.

Another assumption is that the disk is using the legacy [MBR](http://en.wikipedia.org/wiki/Master_boot_record "Master Boot Record") partition table format. But the guide can easily be adapted to disks using a [GPT](http://en.wikipedia.org/wiki/GUID_Partition_Table "GUID Partition Table") format.

Increasing the size of the virtual disk
---------------------------------------
In this guide we are using VMware but this section can be easily adapted to different virtualization systems.

1. **Before increasing the disk size, it is a good idea to consolidate the snapshots of your VM. Right click and go to:
*Snapshots* -> *Consolidate***:

![Consolidate Snapshots](/wp-content/uploads/2015/05/vm-increase-disk-1.png "Consolidate Snapshots")

* Press ‘OK’ when asked to do so. When the confirmation dialog appears, press ‘Yes’:

![Confirm Consolidate](/wp-content/uploads/2015/05/vm-increase-disk-2.png "Confirm Consolidate")
When the operation is completed (check the ‘Recent Tasks’ pane), move to the next step.

2. **Right click on the VM again and go to *Edit Settings*. From here, choose the disk you wish to enlarge**:

![Enlarge Disk](/wp-content/uploads/2015/05/vm-increase-disk-3.png "Enlarge Disk")
Change the size to your desired value. In my case I will grow a 10G disk to 65G. Press ‘OK’ when done.

Now we move on to our Linux system.

Force Linux to detect the changes in the disk size
--------------------------------------------------

1. **Check the detected disk size**:

   # cat /proc/partitions
   major minor  #blocks  name
      8        0   10485760 sda
      8        1     248832 sda1
      8        2          1 sda2
      8        5   10233856 sda5
     11        0    1048575 sr0
    254        0    9760768 dm-0
    254        1     471040 dm-1

As you can see the primary disk (sda) has a size of 10485760KB, which translates to 10GB:

   # echo '10485760/1024/1024' | bc -l
   10.00000000000000000000
   

2. **Find the SCSI subsystem buses**:

   # ls /sys/class/scsi_device/
   0:0:0:0  2:0:0:0
   

***0:0:0:0*** is the primary bus.

3. **Rescan for disk changes**:

   # echo 1 > /sys/class/scsi_device/0\:0\:0\:0/device/rescan
   

4. **Check the new size**:

   # cat /proc/partitions 
   major minor  #blocks  name
      8        0   68157440 sda
      8        1     248832 sda1
      8        2          1 sda2
      8        5   10233856 sda5
     11        0    1048575 sr0
    254        0    9760768 dm-0
    254        1     471040 dm-1

The size is now 65G:

   # echo '68157440/1024/1024' | bc -l
   65.00000000000000000000
   

Resize the partition used by the LVM Physical Volume (PV)
---------------------------------------------------------

1. **Check which partition is used by the PV**:

   # pvs
     PV         VG         Fmt  Attr PSize PFree
     /dev/sda5  ubuntu-vg  lvm2 a--  9,76g    0 
   

So only the ***/dev/sda5*** partition is used by LVM.

2. **Backup the partition table**:

   # sfdisk -d /dev/sda > sda-part.mbr
   

Now you need to save that file elsewhere, because if the partition table goes down the drain, you will have no way to access the backup file on this system. You can use `scp` to transfer the file to another system:

   # scp sda-part.mbr user@another-server:
   

If you need to restore the partition table you can use a recovery/live CD or USB like this:

   # scp user@another-server:sda-part.mbr
   # sfdisk /dev/sda < sda-part.mbr
   

> ***Note***
> *You can use* `sgdisk` *for disks with GPT tables.*
> *Backup:* `sgdisk -b sda-part.gpt /dev/sda`.
> *Restore:* `sgdisk -l sda-part.gpt /dev/sda`

3. **Resize the partition used by the PV**.

* Check the size of the partition:

      # sfdisk -d /dev/sda
      Warning: extended partition does not start at a cylinder boundary.
      DOS and Linux will interpret the contents differently.
      # partition table of /dev/sda
      unit: sectors

      /dev/sda1 : start=     2048, size=   497664, Id=83, bootable
      /dev/sda2 : start=   501758, size= 20467714, Id= 5
      /dev/sda3 : start=        0, size=        0, Id= 0
      /dev/sda4 : start=        0, size=        0, Id= 0
      /dev/sda5 : start=   501760, size= 20467712, Id=8e

* Mark down the details of the ***sda2*** and ***sda5*** partitions in the following table:

Partition | Start Sector | size in KB | size in Sectors
---------- | ------------ | ----------- | ---------------
sda2 | 501758 | 10233857 | 20467714
sda5 | 501760 | 10233856 | 20467712

> ***Note***
> *Each Sector is 512 bytes, so the number of Sectors is double the number of KBytes (1024 bytes each). The logical* `sda5` *partition is 1KB (or 2 Sectors) smaller than the extended* `sda2` *partition.*

* Calculate the sizes of the new partitions:

The total size of the ***sda*** disk is 68157440KB which translates to 136314880 Sectors. So the new size (in Sectors) of ***sda2*** would be:

      # echo 136314880-501758 | bc -l
      135813122
      

The size, in sectors, of ***sda5*** would be:

      # echo 136314880-501760 | bc -l
      135813120
      

According to the calculations above, the new table with the partition details would be:

Partition | Start Sector | size in KB | size in Sectors
---------- | ------------ | ----------- | ---------------
sda2 | 501758 | 67906561 | 135813122
sda5 | 501760 | 67906560 | 135813120

* Resize the ***sda2*** (extended) and ***sda5*** partitions.

Copy the *sda-part.mbr* file to *sda-part-new.mbr* and make the following changes to *sda-part-new.mbr*:

      # partition table of /dev/sda
      unit: sectors

      /dev/sda1 : start=     2048, size=   497664, Id=83, bootable
      /dev/sda2 : start=   501758, size=135813122, Id= 5
      /dev/sda3 : start=        0, size=        0, Id= 0
      /dev/sda4 : start=        0, size=        0, Id= 0
      /dev/sda5 : start=   501760, size=135813120, Id=8e

Now apply these changes to the *MBR* using ***sfdisk***:

      # sfdisk --no-reread /dev/sda < sda-part-new.mbr
      

Ignore any warnings for now.

* Verify the new partition table:

      # sfdisk -d /dev/sda 
      Warning: extended partition does not start at a cylinder boundary.
      DOS and Linux will interpret the contents differently.
      # partition table of /dev/sda
      unit: sectors

      /dev/sda1 : start=     2048, size=   497664, Id=83, bootable
      /dev/sda2 : start=   501758, size=135813122, Id= 5
      /dev/sda3 : start=        0, size=        0, Id= 0
      /dev/sda4 : start=        0, size=        0, Id= 0
      /dev/sda5 : start=   501760, size=135813120, Id=8e

It looks correct.

* Verify that the linux kernel has been notified of the changes:

      # cat /proc/partitions 
      major minor  #blocks  name
         8        0   68157440 sda
         8        1     248832 sda1
         8        2          1 sda2
         8        5   10233856 sda5
        11        0    1048575 sr0
       254        0    9760768 dm-0
       254        1     471040 dm-1

It looks like the system still sees the old partition size. You could use a utility like ***partprobe***, ***kpartx*** or even ***sfdisk*** to force the kernel to re-read the new partition table:

      # sfdisk -R /dev/sda
      BLKRRPART: Device or resource busy
      This disk is currently in use.
      

Alas, if the partition is in use the kernel will refuse to re-read the partition table. In that case just schedule a reboot and try again.
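On more recent systems it is worth trying `partx` before scheduling a reboot; it can often update the in-kernel size of a partition that is in use (hedged: availability and behaviour depend on your util-linux version):

      # partx -u /dev/sda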

After the system reboot:

      # cat /proc/partitions 
      major minor  #blocks  name
         8        0   68157440 sda
         8        1     248832 sda1
         8        2          1 sda2
         8        5   67906560 sda5
        11        0    1048575 sr0
       254        0    9760768 dm-0
       254        1     471040 dm-1

So the new size of the ***sda5*** partition is 64,76GB:

      # echo '67906560/1024/1024' | bc -l
      64.76074218750000000000
      

If the partition size has increased, we can move on to the next step.

Resize the Physical Volume (PV).
--------------------------------

1. **Check the size of the Physical Volume**:

   # pvs
     PV         VG        Fmt  Attr PSize PFree
     /dev/sda5  ubuntu-vg lvm2 a--  9,76g    0 
   

So the size of the PV is still 9,76GB.

2. **Resize the PV**:

   # pvresize /dev/sda5
     Physical volume "/dev/sda5" changed
     1 physical volume(s) resized / 0 physical volume(s) not resized
   

3. **Verify that the size is resized**:

   # pvs
     PV         VG        Fmt  Attr PSize  PFree 
     /dev/sda5  ubuntu-vg lvm2 a--  64,76g 55,00g
   

So the new size of the PV is 64,8GB.

Resize the logical volume.
--------------------------

1. **Check the current size of the logical volume (used for the root filesystem)**:

   # lvs
     LV     VG        Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
     root   ubuntu-vg -wi-ao--   9,31g                                           
     swap_1 ubuntu-vg -wi-ao-- 460,00m
   

The root volume is still at 9,3GB.

2. **Check the free space**:

   # vgs
     VG        #PV #LV #SN Attr   VSize  VFree 
     ubuntu-vg   1   2   0 wz--n- 64,76g 55,00g
   

3. **Resize the root logical volume**:

   # lvresize -L +55,00g /dev/mapper/ubuntu-vg-root
     Extending logical volume root to 64,31 GiB
     Logical volume root successfully resized
   

4. **Verify LV resize**:

   # lvs
   LV     VG        Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
   root   ubuntu-vg -wi-ao--  64,31g
   swap_1 ubuntu-vg -wi-ao-- 460,00m
   

The root logical volume size is now at 64,3GB.

Resize the root filesystem.
---------------------------

1. **Check the current size of the root filesystem**:

   # df -hT
   Filesystem                  Type      Size  Used Avail Use% Mounted on
   rootfs                      rootfs    9,2G  2,2G  6,6G  25% /
   udev                        devtmpfs   10M     0   10M   0% /dev
   tmpfs                       tmpfs     101M  204K  101M   1% /run
   /dev/mapper/ubuntu-vg-root  ext4      9,2G  2,2G  6,6G  25% /
   tmpfs                       tmpfs     5,0M     0  5,0M   0% /run/lock
   tmpfs                       tmpfs     201M     0  201M   0% /run/shm
   /dev/sda1                   ext2      228M   18M  199M   9% /boot
   

So the root filesystem is still at 9,2GB.

2. **Resize the file system**:

   # resize2fs /dev/mapper/ubuntu-vg-root
   resize2fs 1.42.5 (29-Jul-2012)
   Filesystem at /dev/mapper/ubuntu-vg-root is mounted on /; on-line resizing required
   old_desc_blocks = 1, new_desc_blocks = 5
   Performing an on-line resize of /dev/mapper/ubuntu-vg-root to 16858112 (4k) blocks.
   The filesystem on /dev/mapper/ubuntu-vg-root is now 16858112 blocks long.
   

3. **Verify that the filesystem has been resized**:

   # df -hT
   Filesystem                  Type      Size  Used Avail Use% Mounted on
   rootfs                      rootfs     64G  2,2G   58G   4% /
   udev                        devtmpfs   10M     0   10M   0% /dev
   tmpfs                       tmpfs     101M  204K  101M   1% /run
   /dev/mapper/ubuntu-vg-root  ext4       64G  2,2G   58G   4% /
   tmpfs                       tmpfs     5,0M     0  5,0M   0% /run/lock
   tmpfs                       tmpfs     201M     0  201M   0% /run/shm
   /dev/sda1                   ext2      228M   18M  199M   9% /boot
   

So now you have 55GB of additional storage on your root partition, to satisfy your increasing storage needs.

References
----------
* [1] https://ma.ttias.be/increase-a-vmware-disk-size-vmdk-formatted-as-linux-lvm-without-rebooting/
* [2] http://gumptravels.blogspot.com/2009/05/using-sfdisk-to-backup-and-restore.html
* [3] http://askubuntu.com/questions/57908/how-can-i-quickly-copy-a-gpt-partition-scheme-from-one-hard-drive-to-another