I have been using Linux since the late 90s. Back then the only available hard disks were rotating platters. My first solid state disk came with a ThinkPad T400 around 2011, a laptop that is still in operation today; I have been using it for the last five years. To reduce wear, some frequently written directories have been moved into RAM via tmpfs, also called a RAM disk:

user@t400 ~ % grep tmpfs /etc/fstab
tmpfs           /var/tmp/portage        tmpfs           size=6G,mode=0777       0 0
tmpfs           /tmp                    tmpfs           nodev,nosuid,size=4G    0 0

This keeps expensive I/O operations away from the SSD, and it has worked that way here ever since.
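After adding such entries to /etc/fstab on a running system, the tmpfs mounts can be activated without a reboot; a minimal sketch, assuming the mount points from the example above already exist:

```shell
# Activate the tmpfs entries defined in /etc/fstab
# (paths match the fstab example above; adjust to your setup).
mount /tmp
mount /var/tmp/portage

# Verify that both mounts are RAM-backed:
df -h -t tmpfs
```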

Some days ago I was searching for a new SSD for a new machine and read articles about the technology used in solid state disks. In particular I looked at I/O throughput, bus bandwidth, NAND types, and TRIM support. While reading about TRIM I realised that Linux support for it has steadily matured over the years. Solid state disks have arrived in the mainstream; they are almost everywhere today.

However, my own settings in /etc/fstab are still the same as at the beginning of the millennium, apart from the file systems, which I have changed over the years:

  • ext2
  • ext3
  • ext4
  • XFS

Nothing big has changed. The special support for SSDs has to be configured explicitly in Linux.

First we need to find out whether the disk in use supports TRIM at all:

# hdparm -I /dev/sda | grep TRIM
*    Data Set Management TRIM supported (limit 8 blocks)
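The same information is also exposed by lsblk; non-zero DISC-GRAN and DISC-MAX columns indicate that the device supports discard (TRIM):

```shell
# Show discard (TRIM) capabilities of all block devices.
# DISC-GRAN and DISC-MAX are non-zero when TRIM is supported.
lsblk --discard
```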

Here are some benchmarks taken BEFORE enabling TRIM support in Linux:

Test 1 using cached read timings

# hdparm -tT /dev/sda

/dev/sda:
  Timing cached reads:   15996 MB in  2.00 seconds = 8003.84 MB/sec
  Timing buffered disk reads: 1850 MB in  3.00 seconds = 616.22 MB/sec

Test 2 using O_DIRECT to bypass page cache for timings

# hdparm -tT --direct /dev/sda

/dev/sda:
  Timing O_DIRECT cached reads:   1220 MB in  2.00 seconds = 609.16 MB/sec
  Timing O_DIRECT disk reads: 1530 MB in  3.00 seconds = 509.91 MB/sec

This is the benchmark after adding the discard option to each partition in /etc/fstab.
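For reference, this is roughly what such an fstab entry looks like with discard enabled; the device name and filesystem here are only illustrative, adjust them to your own setup:

```
# /etc/fstab -- example only, device and filesystem are placeholders
/dev/sda2    /    ext4    defaults,discard    0 1
```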

Test 1:

# hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   16730 MB in  2.00 seconds = 8371.31 MB/sec
 Timing buffered disk reads: 2240 MB in  3.00 seconds = 746.05 MB/sec

Test 2:

# hdparm -tT --direct /dev/sda

/dev/sda:
 Timing O_DIRECT cached reads:   1032 MB in  2.00 seconds = 515.73 MB/sec
 Timing O_DIRECT disk reads: 2164 MB in  3.00 seconds = 720.86 MB/sec

There is an improvement in almost every section, and the O_DIRECT disk reads gain about 41%: more than 630 MB of additional data read in the same three seconds, only by enabling the discard option per partition.

The third benchmark was taken after changing the noatime option in /etc/fstab to relatime.

Test 1:

# hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   16310 MB in  2.00 seconds = 8161.46 MB/sec
 Timing buffered disk reads: 2338 MB in  3.00 seconds = 778.97 MB/sec

Test 2:

# hdparm -tT --direct /dev/sda

/dev/sda:
 Timing O_DIRECT cached reads:   1222 MB in  2.00 seconds = 610.47 MB/sec
 Timing O_DIRECT disk reads: 2186 MB in  3.00 seconds = 728.60 MB/sec
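Whether the new options are actually active can be verified on the mounted filesystem; a quick check using findmnt, with the root mount point as an example:

```shell
# Print the active mount options of the root filesystem.
# Look for "relatime" (or "noatime") and "discard" in the output.
findmnt -no OPTIONS /
```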

Those settings bring I/O improvements to disks already in use. When creating partitions in the first place, check their proper alignment with system tools like fdisk or gparted.
