AWS Mount EBS Volume to EC2

Amol Kokje · Published in Level Up Coding · Apr 15, 2020

Recently I had to mount an EBS volume to my Linux EC2 instance. Though AWS has a very detailed user guide, there were points where I got confused, and it did not always work. Hence, I wanted to write this short blog with all the steps and their output, as a reference for myself and for others too. Hope you find it useful!

First, stop the EC2 instance, then in the Volumes section select the volume and attach it to the instance (you may also choose to attach it to a running instance). Note that the volume and the instance have to be in the same Availability Zone (AZ).

If the volume is new, it is exposed as a raw block device and does not have a file system. So, if you are using the volume for the first time, follow steps 1 and 2 below; otherwise, skip to step 3.

1. Create a file system on the volume. Say the volume is attached as /dev/sdf (this is listed on the EC2 console):
[ec2-user@ip-xxx-xx-x-xxx ~]$ sudo mkfs -t ext4 /dev/sdf
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
327680000 inodes, 2621440000 blocks
131072000 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
80000 block groups
32768 blocks per group, 32768 fragments per group
4096 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 102400000, 214990848, 512000000, 550731776, 644972544, 1934917632, 2560000000
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
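Before running mkfs, it is worth checking that the device really is blank, since mkfs on a device that already holds a file system destroys its data. On a blank device, “file -s” reports just “data”. A minimal sketch of that check (the device names are the hypothetical ones from this example):

```shell
# Decide from the output of `sudo file -s <device>` whether a file
# system is already present. On a blank device, file -s prints only
# "<device>: data"; anything else means data may already be there.
has_filesystem() {
  case "$1" in
    *": data") return 1 ;;  # blank block device, safe to mkfs
    *)         return 0 ;;  # a file system (or something) is present
  esac
}

# Hypothetical usage:
#   out=$(sudo file -s /dev/sdf)
#   has_filesystem "$out" || sudo mkfs -t ext4 /dev/sdf
```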

2. You can find the device name where the volume is located either by using the “fdisk” command and finding the drive that matches the size of the volume, OR by using the “file -s” command to see where the console device name is linked to.

Using the “fdisk” command: here you can see that for a 10 TiB volume, the drive is “/dev/nvme1n1”.

[ec2-user@ip-xxx-xx-x-xxx ~]$ sudo fdisk -l
Disk /dev/nvme1n1: 9.8 TiB, 10737418240000 bytes, 20971520000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/nvme2n1: 838.2 GiB, 900000000000 bytes, 1757812500 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/nvme3n1: 838.2 GiB, 900000000000 bytes, 1757812500 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/nvme0n1: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: F502FA32-96BF-48E8-BDF0-166C1E74F8FA
Device           Start      End  Sectors Size Type
/dev/nvme0n1p1    4096 16777182 16773087   8G Linux filesystem
/dev/nvme0n1p128  2048     4095     2048   1M BIOS boot

Using the “file -s” command to get the name of the drive: here the volume is attached as /dev/sdf, which links to the NVMe device.

[ec2-user@ip-xxx-xx-x-xxx ~]$ sudo file -s /dev/sdf
/dev/sdf: symbolic link to `nvme1n1'
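On Nitro-based instances the console device name is only a symlink to the real NVMe device, so in a script you can resolve it with “readlink -f” instead of eyeballing the “file -s” output. A small sketch (the device names are the hypothetical ones from this example):

```shell
# Resolve a console device name like /dev/sdf to the underlying NVMe
# device that mount expects, by following the symlink.
resolve_dev() {
  readlink -f "$1"
}

# Hypothetical usage: resolve_dev /dev/sdf  ->  /dev/nvme1n1
```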

3. Mount the drive to a directory on the instance (create the directory first if it does not exist).

[ec2-user@ip-xxx-xx-x-xxx ~]$ sudo mkdir -p /data/ebsmount
[ec2-user@ip-xxx-xx-x-xxx ~]$ sudo mount /dev/nvme1n1 /data/ebsmount

4. The volume is now mounted on the instance, but the mount is lost if the instance reboots. To ensure the volume is mounted again on every reboot, you need to update /etc/fstab with the device’s 128-bit UUID, which persists throughout the life of the partition. Get the UUID using the “blkid” command.

In this example, you can see that the UUID for /dev/nvme1n1 is “7fa6984f-4ae9-41b0-be30-912640826707” and the type is “ext4”.

[ec2-user@ip-xxx-xx-x-xxx ~]$ sudo blkid
/dev/nvme1n1: UUID="7fa6984f-4ae9-41b0-be30-912640826707" TYPE="ext4"
/dev/nvme0n1: PTUUID="f502fa32-96bf-48e8-bdf0-166c1e74f8fa" PTTYPE="gpt"
/dev/nvme0n1p1: LABEL="/" UUID="a1e1011e-e38f-408e-878b-fed395b47ad6" TYPE="xfs" PARTLABEL="Linux" PARTUUID="48273af3-b295-415e-8978-b786bf246692"
/dev/nvme0n1p128: PARTLABEL="BIOS Boot Partition" PARTUUID="48755cf9-8654-40ce-b91b-848334934b6c"

5. Create a backup of the /etc/fstab file (cp /etc/fstab /etc/fstab.orig) and add the UUID and type of the partition to the file. It will look something like this after the update.

Note: If you ever boot your instance without this volume attached (for example, after moving the volume to another instance), the nofail mount option enables the instance to boot even if there are errors mounting the volume.

#
UUID=a1e1011e-e38f-408e-878b-fed395b47ad6 / xfs defaults,noatime 1 1
UUID=7fa6984f-4ae9-41b0-be30-912640826707 /data/ebsmount ext4 defaults,discard,nofail 0 0
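If you script this step, you can pull the UUID straight from blkid and build the fstab line programmatically. A sketch, assuming the ext4 type and the mount options used above:

```shell
# Build an fstab entry for a given UUID and mount point, using the
# same options as the example above (ext4, discard, nofail).
fstab_line() {
  printf 'UUID=%s %s ext4 defaults,discard,nofail 0 0\n' "$1" "$2"
}

# Hypothetical usage:
#   uuid=$(sudo blkid -s UUID -o value /dev/nvme1n1)
#   fstab_line "$uuid" /data/ebsmount | sudo tee -a /etc/fstab
```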

6. Unmount the volume, then test mounting everything listed in /etc/fstab. If you receive an error message, there is an issue with the file; if not, you will see output like this.

[ec2-user@ip-xxx-xx-x-xxx ~]$ sudo umount /data/ebsmount
[ec2-user@ip-xxx-xx-x-xxx ~]$ sudo mount -a --verbose
/ : ignored
/data/ebsmount : successfully mounted

If you are thinking about automating the volume mount procedure using UserData in CloudFormation or some other code, as you can see from the steps above, it will not be that difficult. You can bake a script into the instance AMI or just pass the commands as UserData.
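As a sketch of what such a UserData script could look like, the steps above fit into two small functions. The device name and mount point are assumptions carried over from the example; this only formats the volume when “file -s” reports a blank device:

```shell
#!/bin/bash
# Sketch of a boot-time setup you could pass as EC2 UserData. The
# device and mount point below are assumptions; adjust for your setup.

format_if_blank() {  # $1 = device; run mkfs only when the device is blank
  if sudo file -s "$1" | grep -q ': data$'; then
    sudo mkfs -t ext4 "$1"
  fi
}

mount_and_persist() {  # $1 = device, $2 = mount point
  sudo mkdir -p "$2"
  sudo mount "$1" "$2"
  # Persist across reboots via /etc/fstab, as in step 5.
  uuid=$(sudo blkid -s UUID -o value "$1")
  echo "UUID=$uuid $2 ext4 defaults,discard,nofail 0 0" | sudo tee -a /etc/fstab
}

# Hypothetical usage:
#   format_if_blank /dev/nvme1n1
#   mount_and_persist /dev/nvme1n1 /data/ebsmount
```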

Good luck! :-)
