What do you think this is?

Just thoughts of a restless mind...

Disk technologies and speed

I heard a lot about different disk technologies and how they keep getting better. So when I ran out of space on one of my computers, the one I run my VMs on, I decided to get the best technology my hardware supports.

What I currently have

My motherboard is a Gigabyte B450M-DS3H, which supports NVMe. When I built the computer I got a Kingston A400 SATA SSD at 240 GB (https://www.kingston.com/en/ssd/a400-solid-state-drive?partnum=SA400S37%2F240G) for the operating system. The first time I needed extra space, I just put in a Seagate 500 GB SHDD (SATA) that I had replaced in my laptop and had lying around.

The second disk, the SHDD, is not to be trusted. It is now six years old and, like all my electronic equipment, has seen a lot of wear. I use it for secondary VMs, the ones I don't mind losing.

Overall, though, this is a disk that should be avoided because of its age.

What is new

I decided to buy a new disk because I wanted to test (nerd-speak for "fool around with") several Linux distros with their graphical user interfaces. That brings capacity issues, but also speed concerns. So I opted for a (budget) M.2 NVMe.

The disk I chose was a Kingston SSD M.2 NVMe at 1 TB, which should keep me going for a couple of years and should be fast enough for everything. Of course, that is what the theory said, and being the geek I am, I had to run my own test to see how much faster the new disk really is.

Experiment description

This is by definition not a scientific measurement. There are excellent review sites with proper metrics to do that for you. I, on the other hand, just wanted to find out on my own.

The setup was simple. I have a Kali VM with a 40 GB virtual disk in VirtualBox. I moved the virtual disk from physical disk to physical disk and measured the startup time and the login time. Kali has a 5-second delay in GRUB, which I decided to remove. I started the machine twice and only measured the second run, so that any caching would affect all measurements in the same way. Now that I think about it, I probably should have restarted the host altogether between measurements.
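For reference, this is roughly how the GRUB delay can be removed inside the VM, and how the virtual disk can be moved between physical disks on the host. This is a sketch under assumptions: it assumes Kali's default GRUB setup in /etc/default/grub, a reasonably recent VirtualBox on the host, and the .vdi paths are made-up examples.

```shell
# Inside the guest: drop the GRUB menu delay (assumes the stock /etc/default/grub)
sudo sed -i 's/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=0/' /etc/default/grub
sudo update-grub

# On the host: move the virtual disk to another physical disk while keeping
# VirtualBox's media registry consistent (both paths are hypothetical examples)
VBoxManage modifymedium disk "$HOME/VirtualBox VMs/Kali/Kali.vdi" --move /mnt/nvme/Kali.vdi
```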

Results

Once again, I want to state that these results are for fun. They can give an idea of the differences to expect in this particular use case, but they are in no way accurate.

The SHDD / SATA

This disk is supposed to be an HDD with 8 GB of SSD cache, connected over a SATA 3 interface. In a test like this, it is very unlikely that the SSD part was used at all.

Start up time: 27.04 seconds
Login time: 9.42 seconds

This is not a bad result, although start up feels a bit slow. One can easily work with Kali on this disk, and I do not expect any severe issues in everyday use.

The SSD / SATA

This disk is pure SSD on a SATA 3 interface. Solid state disks are in general (much) faster than their HDD counterparts, and I am sure that if one has I/O-demanding tasks, this is the easiest upgrade, despite the cost. Last time I built a PC, SSDs were only for performance-critical servers. Go figure...

Start up time: 22.62 seconds
Login time: 7.15 seconds

This is not a bad result either. It was my setup until yesterday, and I never had any problems that I would attribute to disk speed. The improvement over the HDD is 4.42 seconds on startup and 2.27 seconds on login. In other words, that is a reduction of 16.3% in start up time and 24% in login time. If that reduction applies to all I/O operations, that's quite a good number!
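The percentages are just the relative reduction of each time, (old - new) / old. A quick check with the numbers above, using awk as a calculator:

```shell
# Relative reduction: (old - new) / old, SHDD vs SATA SSD times
awk 'BEGIN {
  printf "startup reduction: %.1f%%\n", (27.04 - 22.62) / 27.04 * 100
  printf "login reduction:   %.1f%%\n", (9.42 - 7.15) / 9.42 * 100
}'
# prints 16.3% and 24.1%
```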

The SSD / NVMe

This disk is pure SSD in an M.2 form factor, connected to the NVMe PCIe x4 interface. Last time I built a PC, that technology did not even exist.

Start up time: 20.16 seconds
Login time: 5.98 seconds

These results are, as expected, even better, but they represent only a 10.9% and a 16.4% improvement over the SATA 3 SSD for the start up and login times respectively.
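These percentages are again the relative reduction, (old - new) / old, this time with the SATA SSD as the baseline:

```shell
# Relative reduction: (old - new) / old, SATA SSD vs NVMe SSD times
awk 'BEGIN {
  printf "startup reduction: %.1f%%\n", (22.62 - 20.16) / 22.62 * 100
  printf "login reduction:   %.1f%%\n", (7.15 - 5.98) / 7.15 * 100
}'
# prints 10.9% and 16.4%
```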

Conclusion

I know that the two interfaces are specified with different metrics. The PCIe bus is faster than SATA 3, so the disks, even if the storage technology is the same, should be faster. But performance is affected by many more parameters than just the disk.

For this particular use case, it wouldn't make sense to go for a PCIe disk over a SATA 3 one. But if I were, for example, a photographer needing to read heavy photos from disk to manipulate them, I assume the impact would be much more significant.