Spikes in HDD benchmark


New WD 2TB drive: the benchmark shows a lot of spikes rather than the smooth curve I was expecting.

[Image: WD_2_TB.png]

The Samsung 2TB looks quite similar:

[Image: Samsung_HD204_UI_Windows_on_SSD.png]

I have an SSD now, but back when Windows was on the Samsung it was obvious that Windows was accessing the drive and affecting the benchmark:

[Image: Samsung_HD204_UI_running_Windows_from_it]

My Samsung 500GB in a USB3 enclosure produces a fairly smooth curve

[Image: Samsung_HD500_JI_USB3_on_X4_after_reboot]

So any ideas why the internal drives don't look like that?

I also noticed that the WD has a much higher burst rate than the Samsung (303 vs 180 MB/s), although the other stats are pretty close. So would games and apps load faster from the WD, or would the burst rate not affect that?

Edited by doveman


Well, I would say that the first thing that jumps out is the radically different Y axis on the four transfer-rate benchmarks (one uses 200 MB/s, two use 150 MB/s, one uses 90 MB/s), so the amplitude of the transfer-rate plots isn't good for fast visual comparison. It looks like this to me ...

SATA Western 2TB aux ... +/- 25 MB/s
SATA Samsung 2TB aux ... +/- 12 MB/s
SATA Samsung 2TB sys ... +/- 85 MB/s
USB Samsung 500 aux .... +/- 8 MB/s


So they're tighter than they appear at first glance. If there were some way to use the same scale for all the plots, a visual comparison would make some sense.
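For example, if you could capture each run's transfer-rate samples into files, a little Python sketch like this would replot them all on one fixed Y axis (the CSV file names and the two-column format here are made up, since HDTune doesn't hand you the raw curve):

    # Sketch: replot several runs on one fixed Y axis so spike amplitude
    # is directly comparable. Assumes you've captured each run's samples
    # into two-column CSV files (position_pct, mb_per_s); the file names
    # and format here are invented.
    import csv
    import matplotlib.pyplot as plt

    RUNS = {
        "WD 2TB SATA": "wd_2tb.csv",
        "Samsung 2TB SATA": "samsung_2tb.csv",
        "Samsung 500GB USB3": "samsung_500_usb3.csv",
    }

    fig, ax = plt.subplots()
    for label, path in RUNS.items():
        with open(path, newline="") as f:
            rows = [(float(x), float(y)) for x, y in csv.reader(f)]
        xs, ys = zip(*rows)
        ax.plot(xs, ys, label=label)

    ax.set_ylim(0, 200)   # the same scale for every plot
    ax.set_xlabel("Position (%)")
    ax.set_ylabel("Transfer rate (MB/s)")
    ax.legend()
    plt.show()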

But then you clearly have the anomaly of that Samsung 2TB when it was a system disk. The temp was way higher at the time (11 degrees C / 20 degrees F higher), and the dead giveaway of a bad benchmark was that minimum of 1.8 MB/s during that test. Results like that should generally be thrown away and the test re-run, because outliers cause exactly this kind of discrepancy later, when you look back and try to rate the drive's earlier performance.

Most likely it was any number of Windows 7 services (housecleaning, update check, event logging, RAM paging out, perf monitoring, registry flush, CEIP compiling or transmitting, file relocation, indexing, polling). Polling occurs often because even though Windows "waits" for you to not be doing something before performing optional tasks and maintenance, it still has to cut in and determine whether you are doing something (kind of a paradox). Just run ProcMon for a few seconds to get an idea of how many things are really going on.

Of course any realtime antivirus will sour a benchmark in a variety of ways, since it literally monitors everything you do. And if there is any 3rd-party software running (anything from an open application to a Google, Bing, Java, Flash, Apple, HP, Norton or other updater service), it can also pop in invisibly to make a quick update check or whatever. A lot of these events consist of reading a file that isn't paged into memory from the disk, in order to see when the last update was, and this might occur during the test and tank the benchmark, maybe giving you that 1.8 MB/s. In this case the fact that the tested disk is also the system disk carries a big penalty, but many of those things can affect an aux disk too.

Things I do to get smoother, comparable numbers. First, know that the first three tabs in HDTune offer text copy/paste, which I always dump into a text file (naturally, on the first tab you run the test first or you get blank results). The data can later be added to a spreadsheet so you can do some math to get averages, min, max, whatever. This of course implies that you re-run the test multiple times and discard any outliers; see the sketch below. Run the test after a reboot, but not immediately after the reboot: give it a few minutes. And don't run it after the machine has been sitting idle for a long period either.
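As a rough sketch of that spreadsheet math in Python, assuming your saved dumps contain a line like "Transfer Rate Average : 95.2" (the folder name and the regex are guesses at the dump format, so adjust them to match your actual files):

    # Sketch: pull the average transfer rate out of a pile of saved HDTune
    # text dumps, screen out the outliers, and print the stats.
    import re
    import statistics
    from pathlib import Path

    # The regex is a guess at the dump format -- adjust to taste.
    PATTERN = re.compile(r"Transfer Rate Average\s*:\s*([\d.]+)")

    averages = []
    for path in Path("benchmarks").glob("*.txt"):   # made-up folder
        m = PATTERN.search(path.read_text(errors="ignore"))
        if m:
            averages.append(float(m.group(1)))

    median = statistics.median(averages)
    # Treat anything more than ~20% off the median as a tainted run.
    kept = [v for v in averages if abs(v - median) / median < 0.20]

    print(f"{len(averages)} runs, {len(kept)} kept after the outlier screen")
    print(f"avg {statistics.mean(kept):.1f}  min {min(kept):.1f}  max {max(kept):.1f} MB/s")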

When I make these HDTune text files I make sure they are internally identical in format (CRLF matching), because I can then load a bunch of these separate files into a text editor and fast-switch between them for an A/B/C/... comparison (think of how astronomers discover comets and asteroids by blinking between photos and ignoring the stationary stars). Any big change then jumps right out of the data: temp, speed, SMART stats, etc. These are things you need to know. But of course this depends on first screening out bogus data from Windows or some Windows program coming along and crashing your party.
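The same blink comparison can be done in code: parse each dump as loose "key : value" lines and report only the fields that changed between two runs (a sketch, assuming the dumps share the same layout; the file names are made up):

    # Sketch: a "blink comparison" in code. Parse two dumps as loose
    # "key : value" lines and print only the fields that differ.
    from pathlib import Path

    def parse(path):
        fields = {}
        for line in Path(path).read_text(errors="ignore").splitlines():
            if ":" in line:
                key, _, value = line.partition(":")
                fields[key.strip()] = value.strip()
        return fields

    a, b = parse("run_a.txt"), parse("run_b.txt")   # made-up file names
    for key in sorted(a.keys() & b.keys()):
        if a[key] != b[key]:
            print(f"{key}: {a[key]}  ->  {b[key]}")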

I also run HDTach right before or after HDTune as a sanity check. If CPU usage is high on either, or the two results differ by more than a few MB/s, then I re-run them until they are very close.
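In code that check is just a tolerance test, something like this (the two averages are placeholders you type in from HDTune and HDTach):

    # Sketch: the sanity check as a tolerance test.
    def agree(hdtune_avg, hdtach_avg, tolerance=3.0):
        return abs(hdtune_avg - hdtach_avg) <= tolerance

    print(agree(95.4, 97.1))   # True: close enough, keep the result
    print(agree(95.4, 88.0))   # False: something cut in, re-run both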

It's a very imperfect science; in fact I'd say it is closer to art at this point in time. Windows isn't designed with any real modes for gaming, benchmarking, or even core maintenance. Nor does it accept a request to "back off and do nothing while I play this game or benchmark my computer". It should, but it doesn't (and even if it did, it wouldn't prevent 3rd-party apps or realtime AV from jumping in; often you can go in and disable the realtime AV, or at least most of it). So it makes sense to create an Administrator account where you go in and set it up for minimum distraction: disable everything not needed and only benchmark from there (leave the other user accounts set up for daily use). It still isn't anywhere near perfect, especially on later versions of Windows, but you can get closer to the desired "Idle" state that is needed.
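As a rough way to verify you actually reached that idle state before starting a run, here is a sketch using the third-party psutil package (the thresholds and timings are arbitrary guesses):

    # Sketch: a rough "is the machine actually idle?" check to run just
    # before a benchmark, using psutil (pip install psutil).
    import time
    import psutil

    before = psutil.disk_io_counters()
    time.sleep(5)
    after = psutil.disk_io_counters()
    mb = (after.read_bytes - before.read_bytes
          + after.write_bytes - before.write_bytes) / 1e6
    print(f"background disk traffic over 5 s: {mb:.1f} MB")

    if mb > 1.0:   # threshold is an arbitrary guess
        # Name the likely culprits by CPU use. The first cpu_percent()
        # call only primes the per-process counters.
        procs = list(psutil.process_iter())
        for p in procs:
            try:
                p.cpu_percent(None)
            except psutil.Error:
                pass
        time.sleep(2)
        for p in procs:
            try:
                if p.cpu_percent(None) > 1.0:
                    print("busy:", p.name())
            except psutil.Error:
                pass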

I also try to control the physical parameters as much as possible. The temps of the drives should match over time while they are still in the same physical location; a difference in temp means there is something that needs to be addressed. Most likely it is dust build-up, a fan not running at the same speed (or dead), a ribbon cable or something that has altered the airflow, or something far worse. I wouldn't proceed until I got the temp back to where it should be (note: this does NOT describe your case here). In your case I suspect the drive was in a different physical location when it was the system disk, perhaps mounted vertically in the front bottom without a fan blowing on it, or mounted above something warm whose rising air affected it. I've seen them all.

What I have been doing for many years is collecting spare 3.5" cages out of computer cases (some hold two, four, even more drives). I turn them into standalone HDD racks with rubber feet and a handle (remove the sharp edges, paint them, etc.) and modify my own cases by removing any HDD cages (leaving a big empty space there) and adding 120mm fans in the front bottom. Now I just drop in one of the cages with the HDDs already mounted and pop in the wires already dangling off the motherboard and power supply. It makes drive swapping, cleaning and re-arrangement very quick and thorough, and I can also easily get to the fans. Most importantly it means all my HDDs are always located in identical conditions on any computer, with a fan blowing directly on them and no major vibrations (rubber feet). It is a consistently controlled parameter, well, as much as possible. One of these days I'll post some pictures, which will make it more understandable. The main point being: controlling the variables that can make benchmarks anomalous and non-comparable. Of course this doesn't help if a benchmark occurred in the distant past and your time machine is broken (I have many historical cases that fall under that category, and nothing can be done about them now). But at least going forward, all drives get the same treatment.


If you read his post, I believe he said "it depends". I.e., unless all of your tests are on the exact same system under the exact same conditions, at the exact same good temperature, preferably with every possible interfering app and service disabled, and with the printouts done at the exact same scale, then you are really not comparing apples to apples, and the "spikes" you see might or might not be real. This is especially true if your internal drive was the system drive, or had the paging file or %temp% on it; that will invalidate any direct comparison to any other drive. If you can redo your tests to meet these criteria, print the graphs using the same scale and repost, then we can more accurately answer your question.

Cheers and Regards

Edited by bphlpt

OK, well those conditions are a bit beyond the average user (including myself), so how are we meant to benchmark a drive and check it's operating normally?

None of the HDDs are the system drive now that I've got an SSD; the temp files go onto a RAMdisk and the only pagefile is on the SSD (which I normally disable as well, but I've got it enabled at the moment). My Documents is on the Samsung HDD though, as a lot of games create files in there (and in the Saved Games subfolder), so it gets rather large, and I also like to keep the data separate from the system drive as it makes it easier to do system images and data backups separately.


it makes sense to create an Administrator account where you go in and set it up for minimum distraction: disable everything not needed and only benchmark from there (leave the other user accounts set up for daily use). It still isn't anywhere near perfect, especially on later versions of Windows, but you can get closer to the desired "Idle" state that is needed.

Benchmarking from a Windows PE that would have the needed drivers included is also a good idea.


Benchmarking from a Windows PE that would have the needed drivers included is also a good idea.

Sure, but maybe killing anything unneeded with Task Manager would be just as good and perhaps a bit easier (it depends on whether you already have a PE with the drivers installed or have to spend time building one).

It's probably also important to test with the drivers you're actually using in your normal OS. Testing with different drivers in PE could return a good result while the proper OS drivers might not behave the same, so you could be assuming that it's all functioning as well as it should be when it's not.


All true. But the difference between "optimal" benchmarking and "typical" is very important. I think optimal conditions should be strived for here because they constitute the best-case scenario; all normal-use cases will fall somewhere below that. The best-case benchmark results then become a goal to strive for in normal use by tuning and tweaking (you know, eliminating unnecessary services and stuff), and though you will never match them, you do know what the machine is capable of and limited to.

It is similar to when you weigh yourself. It is best IMHO to record your minimum weight ( ~ahem~ after you wake up and do your business ... ). This is because you only have one minimum weight at any given moment, but many different "normal-use" weights during the course of your day ( + several pounds even if you are trim ). You can easily skew the historical record by weighing yourself at inopportune times and then you scratch your head thinking the scale is broken. That 1.8 MB/s was clearly an inopportune time ( it recalls the PIO days of hard drives, and believe me when we finally got disks and controllers that routinely did 16 MB/s and more it was like Christmas ).

One thing that slow benchmark result does underline, though, is the propensity for even modern Windows to slow to a crawl during normal operation. You can still get CD/DVD coasters, have a game freeze up, have Explorer become unresponsive, and any number of other sour experiences. However, one thing that slow benchmark does not show is a bad or failing disk, or even a bad Windows installation. And that is precisely the problem: it is business as usual.

When running benchmarks on a typical system (without all the painstaking optimizing) I think it best to run them a few times. And like I mentioned, if you own HDTach, run it as well to get an idea of whether something is anomalous by comparing the two. In one sense, though, you already have useful results: when the drive is slaved you get better numbers, showing that it hits its expected performance when Windows isn't dragging it down. So those numbers could serve (were you ever to use it as a system drive again) as a guide to optimizing. But now you're on an SSD, so it is a moot point. SSDs are the best thing to happen to Windows design in a long time, as their access time and throughput thoroughly swamp the design flaws of Windows, burying the normal-use collisions of everyday tasks and services.

Oh, the original question: "So any ideas why the internal drives don't look like that?" ... blame Windows.

The later question: "All I want to know is if the spikes are normal and if so, why I don't get them with the USB3 drive?" ... normal for Windows, on those particular test runs. Run them again and you'll have something completely different to wonder about, though. That USB disk result, with its very short Y axis, is essentially showing the interface throughput capped: whatever spikes were present from the HDD itself were smoothed by the lower bandwidth of USB, which acts like a low-pass filter. In short, USB was slower than the worst performance of the physical disk in that test. However, if there were a way to use the USB disk as the primary system drive, and if it were to get hammered by some task or process that drives it down to 1.8 MB/s again, then you would see some downward spikes.
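A toy illustration of that capping idea (all numbers invented): clip a spiky platter curve at an interface ceiling that sits below its worst sample, and the output comes out flat:

    # Toy model: an interface ceiling below the disk's worst sample clips
    # every spike flat, like a low-pass filter. All numbers are invented.
    hdd_curve = [120, 95, 130, 60, 110, 45, 125, 80]   # spiky SATA samples, MB/s
    usb_cap = 40                                       # effective USB ceiling

    usb_curve = [min(sample, usb_cap) for sample in hdd_curve]
    print(usb_curve)   # [40, 40, 40, 40, 40, 40, 40, 40] -- dead smooth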

All in all it looks pretty much business as usual given the circumstances.


OK, I'm somewhat reassured if you think the results look normal :)

However, the USB3 drive is actually a Samsung 500GB SATA drive in a SATA->USB3 enclosure, and I've had it connected previously both as straight SATA and as e-SATA (which are essentially the same thing). The curve looks much the same with those as well (in fact, even smoother), so it's not the USB interface that's producing better results than the internal drives, nor is USB really capping the throughput, as the results from all three connection methods are pretty much the same.

This is e-SATA, where the burst rate is somewhat lower than both SATA and USB3, but that's probably just an anomaly.

[Image: Samsung_HD500_JI_e_sata.png]

and this is SATA:

[Image: Samsung_HD500_JI_SATA_on_X4.png]

For the WD and Samsung 2TB drives (with Windows on the SSD) I did start the test a few times on both, and it looked the same each time at the start, so there was no point letting the test run to the end. It just seems strange that the WD 2TB and Samsung 2TB 3.5" drives would be up and down all over the place, whilst the Samsung 500GB 2.5" has a nice smooth downward curve.

This is the SSD running Windows. I'll have to re-do the tests with Windows running from the HDD, which is easy enough as I still have it installed on the Samsung, to see how they compare. I've got a couple of Windows installations on the HDD, so I'll check which is the "cleanest" and then disable as much as possible and test again to see if it makes any difference.

[Image: Samsung_840_Evo_256_GB.jpg]

