
Does copying several gigabytes daily wear out the hard drive?



Hello

I am recording some TV shows daily in .MPG format; about an hour equals one gigabyte in size.

I have one PC (an old one) dedicated to this purpose only (recording with a DVB-S 420 PCI card). At the end of the day I copy what has been recorded to my external hard drive and do the encoding (cut, merge, etc.) on another, more powerful computer.

My question is: does copying several gigabytes per day wear out the hard drive?

Or could using some special copying tool keep the hard drive healthy?

Please advise.



I don't see why copying data to an external hard drive should wear it out faster than leaving it powered on for the same amount of time.

But then again, I may be missing some point about the wear of a hard drive's internals.


Everything wears out eventually (especially with moving parts). You may also notice an increase in drive temperature (during the move/copy process, or in normal operating temperature) or hear more noise coming from the heads as they seek data scattered throughout the drive. On a healthy drive the heads won't make much noise; on failing drives the heads get louder when working, and that is when I plan on replacing the drive.

I have worn out drives, mostly when I was recording a lot on my PC. The drives wore out, they didn't fail (I didn't wait for them to): as a drive began to idle at higher temps and/or made a bit more noise, I checked the power-on hours and decided whether the drive had served its purpose; if so, I replaced it. I suppose I've been lucky, as I haven't had to deal with a drive getting numerous bad sectors. I personally would ditch such a drive (as cheap as they are now) - no bad sectors for me (IMO).

I use two external USB enclosures with 320 GB drives for storing my ripped CDs - mostly as .flac files, but some are single images, 700+ MB in size. I rip the CD to .wav, edit, then encode to .flac or a single audio image. I will then move 4 or 5 GB at a time from the slave drive to the external enclosure - this is a workout for a drive, but it's what they're designed for - cooling is important.

On a low-hours drive there should be little concern (but any drive can fail at any time regardless of age or hours). If the drive is getting up there in age (power-on hours, power-on count) you will want to watch it, but don't worry about it - just don't store data on it that you can't afford to lose :)

I like TeraCopy for moving data from the slave drive (inside my PC) to the external enclosures - http://www.codesector.com/teracopy.php - not sure if there's a file size limit with the free version.
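If you ever want the same verify-after-copy idea without a third-party tool, it's easy enough to script yourself. Below is just a minimal Python sketch (standard library only, hypothetical folder names) - it is not how TeraCopy works internally, just the general idea of re-reading and comparing checksums after the copy:

```python
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1024 * 1024) -> str:
    """Hash a file in 1 MiB chunks so multi-gigabyte recordings don't fill RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def copy_and_verify(src: Path, dst_dir: Path) -> Path:
    """Copy src into dst_dir, then re-read both copies to confirm they match."""
    dst_dir.mkdir(parents=True, exist_ok=True)
    dst = dst_dir / src.name
    shutil.copy2(src, dst)  # copy2 also preserves timestamps
    if sha256_of(src) != sha256_of(dst):
        raise IOError(f"Verification failed for {dst}")
    return dst

if __name__ == "__main__":
    # Hypothetical locations -- adjust to your own recording folder and external drive.
    for mpg in Path("C:/Recordings").glob("*.mpg"):
        print("copied:", copy_and_verify(mpg, Path("E:/TV-Archive")))
```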

HTH :D


Thanks for your help.

What tool should I use from time to time to check the hard drive's health status?

There are several copying tools, like TeraCopy and WinMend File Copy, so when comparing tools, what are the key factors to compare by?


What tool should I use from time to time to check the hard drive's health status?

There is NONE (actually none that has ANY kind of reliability).

The only meaningful data we have is from a (now getting a bit dated) Google paper:

http://labs.google.com/papers/disk_failures.pdf

that proved beyond any possible doubt that the only tool you have, which is the S.M.A.R.T. technology (one that I personally call "D.U.M.B."), is substantially m00t and has NO actual capability of forecasting a drive failure reliably. :(

The only things you can do are:

  1. keep them drives as cool as possible (this is the MAIN thing)
  2. power them with a "good" power supply unit
  3. reduce as much as you can the number of power on / power off cycles
  4. do a CHKDSK periodically (say once a month or so); after the CHKDSK, check the SMART data, and if the Realloc counters have grown noticeably, change the disk drive (a rough sketch of such a check follows below)
  5. every nine months or so, flip a coin: if it's heads, replace the drive; if it's tails, keep it until the next coin flip (please note that, on average and over a sufficiently big number of drives, this approach will not produce results substantially different from a dedicated hardware parameter monitoring program and/or SMART-based prediction)

SMART readings are "mostly useless" in the sense that, whilst a drive that starts increasing some of the few meaningful SMART parameters should be replaced as soon as possible (INDEPENDENTLY of the result of the coin flip), the number of drives "simply failing" without any SMART error in advance is so big that the coin flip is a good enough approximation.
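For what it's worth, with the free smartmontools package installed (the `smartctl` utility), the check in point 4 can be scripted. This is only a rough sketch under those assumptions - the device path and the "anything above zero" rule are mine, not official guidance:

```python
import re
import subprocess

def smart_attribute(device: str, name: str):
    """Return the raw value of a SMART attribute as reported by 'smartctl -A'."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=False).stdout
    for line in out.splitlines():
        if name in line:
            # The raw value is the last column of the attribute table.
            try:
                return int(re.split(r"\s+", line.strip())[-1])
            except ValueError:
                return None  # some attributes report raw values in other formats
    return None

if __name__ == "__main__":
    device = "/dev/sda"  # assumption -- adjust to your own drive / OS naming
    for attr in ("Reallocated_Sector_Ct", "Current_Pending_Sector"):
        value = smart_attribute(device, attr)
        print(f"{attr}: {value}")
        if value is not None and value > 0:
            print("  -> counter is non-zero; watch this drive and keep backups")
```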

There are several copying tools, like TeraCopy and WinMend File Copy, so when comparing tools, what are the key factors to compare by?

There is no actual difference between the various tools/utilities: to write a given number of bytes, the drive's spindle motor needs to do a given number of turns and the set of heads needs to travel back and forth a given number of times.

The only thing to consider is of course whether your copying strategy causes the disk drive to heat up (intensive disk activity, like for example cloning a drive as RAW, may slightly increase the temperature), but since you have a good cooling solution for them (#1 above: keep 'em cool) this is UNlikely to change anything.

jaclaz


There are two terms I am not quite sure I understand:

  1. flip a coin
  2. S.M.A.R.T. or SMART

Could you please explain them more simply, or with an image, to give me a better idea?

Thanks for your time.


There are two terms I am not quite sure I understand:

  1. flip a coin
  2. S.M.A.R.T. or SMART

Could you please explain them more simply, or with an image, to give me a better idea?

Thanks for your time.

A coin flip is anything that will give you (on average) a 50% (or very nearly 50%) probability:

http://www.facade.com/coin_flip/

http://www.codingthewheel.com/archives/the-coin-flip-a-fundamentally-unfair-proposition

S.M.A.R.T. or SMART:

http://en.wikipedia.org/wiki/S.M.A.R.T.

A play on words on the normal meaning:

http://www.thefreedictionary.com/smart

http://www.synonyms.net/synonym/dumb

See also ;):

http://en.wikipedia.org/wiki/Flipism
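If it helps to see the joke in numbers, here is a toy simulation of the "flip a coin every nine months" policy from the earlier post, compared against a predictor that only warns before some of the failures. All the probabilities here are made up for illustration (the SMART figure is only "in the spirit of" the Google paper linked earlier, which found that more than half of failed drives give no strong SMART signal first):

```python
import random

DRIVES = 100_000       # simulated drive population (made-up number)
P_FAIL = 0.05          # chance a drive fails before the next nine-month check (made-up)
P_SMART_WARNS = 0.44   # share of failures preceded by a SMART warning (illustrative)

random.seed(0)
coin_saves = smart_saves = failures = 0

for _ in range(DRIVES):
    if random.random() >= P_FAIL:
        continue                      # this drive survives the period
    failures += 1
    # Coin-flip policy: at the nine-month check you replace the drive on heads.
    if random.random() < 0.5:
        coin_saves += 1
    # SMART policy: you replace the drive only if it raised a warning first.
    if random.random() < P_SMART_WARNS:
        smart_saves += 1

if failures:
    print(f"failures: {failures}")
    print(f"caught by coin flip:        {coin_saves / failures:.0%}")
    print(f"caught by SMART prediction: {smart_saves / failures:.0%}")
```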

jaclaz


  • 2 weeks later...

jaclaz is trying to make a bit of a joke, which is sadly being lost in translation here. The simple fact is that hard drive failures can be difficult to predict, and usage characteristics may or may not be the cause of failure in your particular hard drive. I've had many, many hard drives in my systems over the past few years, some of which I use constantly (I've got a pair of hard drives that I write and delete about 1-2TB of data to each week), and others that I only write to occasionally.

What jaclaz was trying to say was that, in my situation, it could be any one of my drives that fails next. The ones that I write to more often may last me another 3-4 years, or they may die tomorrow. The fact that I write a lot of data to them is often moot.


What jaclaz was trying to say was that, in my situation, it could be any one of my drives that fails next. The ones that I write to more often may last me another 3-4 years, or they may die tomorrow. The fact that I write a lot of data to them is often moot.

And additionally (and WITHOUT ANY actual data to back up this statement :ph34r:) jaclaz's absolutely random observations tend to lead to the conclusion that the more dense (pardon me the pun ;)) the magnetic media is, the more it will be prone to some kind of failure. :w00t:

Seriously, anyone remember the BIGFOOT drives?

http://en.wikipedia.org/wiki/Quantum_Bigfoot_(hard_drive)

And as a matter of fact they weren't even particularly reliable....

The industry is currently "jamming" (I don't think there is a better term for it) increasingly huge quantities of data into the same space (the 3.5" form factor).

It is quite natural that with terabyte hard disks the precision of the mechanical parts needs to be very, very high, and even the smallest imbalance or tolerance due to wear is more likely to produce a problem.

http://www.forensicfocus.com/index.php?name=Forums&file=viewtopic&t=7552

http://www.forensicfocus.com/index.php?name=Forums&file=viewtopic&t=7552&start=16

Usual sound advice:

http://www.storagedepot.co.uk/buying_guides/drive_reliability.aspx

jaclaz


I'll take S.M.A.R.T. over nothing any day.

Doctors can't predict (guess) when we will die; they simply let us know we are accumulating bad sectors or some parts don't spin up as fast as they used to :)

I still visit a doctor once in a while and have a look at my drive's S.M.A.R.T. details; without either I would be in the dark as to whether any problems exist.


I'll take S.M.A.R.T. over nothing any day.

Doctors can't predict (guess) when we will die; they simply let us know we are accumulating bad sectors or some parts don't spin up as fast as they used to :)

I still visit a doctor once in a while and have a look at my drive's S.M.A.R.T. details; without either I would be in the dark as to whether any problems exist.

Yep :) and faith in something is often capable of keeping one's mind at rest.

In my perverted mind, something that is implemented with the aim of predicting a failure and succeeds in around 50% of cases is not something to have faith in.

If you read this paper:

http://www.seagate.com/docs/pdf/whitepaper/enhanced_smart.pdf

you may better understand what it was actually designed for (Predictive Failure Analysis BTW).

Verbatim from the paper:

Mechanical failures, which are mainly predictable failures, account for 60 percent of drive failure. This number is significant because it demonstrates a great opportunity for reliability-prediction technology. With the emerging technology of S.M.A.R.T., an increasing number of predictable failures will be predicted, and data loss will be avoided.

This is theory.

The Google study is practice (and actually on a large enough scale to be reliable).

A verbatim quote from it:

After our initial attempts to derive such models yielded relatively unimpressive results, we turned to the question of what might be the upper bound of the accuracy of any model based solely on SMART parameters. Our results are surprising, if not somewhat disappointing. Out of all failed drives, over 56% of them have no count in any of the four strong SMART signals, namely scan errors, reallocation count, offline reallocation, and probational count. In other words, models based only on those signals can never predict more than half of the failed drives. Figure 14 shows that even when we add all remaining SMART parameters (except temperature) we still find that over 36% of all failed drives had zero counts on all variables. This population includes seek error rates, which we have observed to be widespread in our population (> 72% of our drives have it) which further reduces the sample size of drives without any errors.

jaclaz
