
Use two physical drives to speed up defragmentation of large HDDs


PicoBot


Hi everybody,

My question is inspired by the following idea: when completely defragmenting a HDD (e.g. what PerfectDisk calls SmartPlacement), it is often necessary to "move out" files to make free space and then "move in" these files again later. Especially on big HDDs with several TB of capacity, this step slows the defragmentation process down severely, since the r/w heads have to travel in and out very often. Therefore my idea is: imagine that you have one HDD which is heavily filled and fragmented, and a second HDD of identical size and geometry which is totally empty. First, both HDDs are taken offline to prevent any changes during the defragmentation process. In a second step, all files on the first HDD are analyzed to produce a detailed layout plan describing where those files, directory structures and metadata have to be placed on the second HDD to obtain an exact, but already defragmented, copy of the first HDD. In a third step, the data is copied to the second HDD according to that layout plan. Of course such a hypothetical program would need very detailed knowledge of the file system it is reorganizing.

My expectation is that this method would speed up the defragmentation process a lot, because it can read from the first HDD and write to the second HDD at the same time. The second advantage is that such a program would need far less movement of the r/w heads, which should also speed up the defragmentation significantly.
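As a rough illustration of the three steps, here is a minimal sketch in Python. It is only conceptual: the drive letters, the "plan" (a simple copy order by file size) and the idea that copying onto an empty volume yields contiguous files are all assumptions, and a real tool would have to compute exact on-disk placement through the file system driver.

import os
import shutil

SOURCE = "E:\\"   # heavily fragmented source volume (placeholder letter)
TARGET = "F:\\"   # identical, empty target volume (placeholder letter)

# Steps 1+2: analyze the source and build a layout plan.
# Here the "plan" is just a copy order (smallest files first); a real tool
# would compute exact cluster ranges for every file and metadata record.
plan = []
for root, dirs, files in os.walk(SOURCE):
    for name in files:
        path = os.path.join(root, name)
        plan.append((os.path.getsize(path), path))
plan.sort()  # placement policy is an arbitrary assumption

# Step 3: copy to the empty target according to the plan.
for _, src in plan:
    rel = os.path.relpath(src, SOURCE)
    dst = os.path.join(TARGET, rel)
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    shutil.copy2(src, dst)  # each file should land contiguous on the empty volume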

Has anybody of you ever heard of such a defragmentation program being in development?

C.U. Picobot



When you copy a file to a "target" disk, it is written (if there is enough contiguous space available) as contiguous, so when you copy a whole set of files (the contents of a whole volume) onto another, just-formatted volume, it will arrive "defragged".

On NTFS there is an exception due to the placement of some of the filesystem metadata files "not at the beginning" of the volume, but it may affect at most one or a few files, and only if they are extremely large and happen to be copied at the "right" moment. (As a side note, remember how on NTFS it is a good idea NOT to fill a volume up to the brim, but rather to leave some 5/10/15% unused.)
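If you want to verify that the copied files really did arrive contiguous, the built-in defrag tool's analysis mode gives a quick report; a minimal sketch (the drive letter is a placeholder, and the command needs an elevated prompt):

import subprocess

# Ask the built-in Windows defrag tool for a fragmentation report of the
# freshly copied volume (/A = analyze only, /V = verbose output).
subprocess.run(["defrag", "F:", "/A", "/V"], check=True)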

Once upon a time, in NT 3.51 and early NT 4.00 times, there wasn't a "defrag" tool built into the OS, and the "poor man's" solution was exactly that: copy the contents to another volume (using XCOPY), reformat the original, and then copy them back to the freshly formatted volume.
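For reference, that sequence could be scripted roughly like this. It is only a sketch: the drive letters are placeholders, it assumes the copy is run from another OS instance so the source files are not in use, and the reformat step is deliberately left manual.

import subprocess

SRC, SPARE = "C:\\", "D:\\"   # placeholder drive letters

# 1) Copy everything to the spare volume. XCOPY switches: /E subdirectories
#    (including empty ones), /H hidden and system files, /K keep attributes,
#    /C continue on errors.
subprocess.run(["xcopy", SRC, SPARE, "/E", "/H", "/K", "/C"], check=True)

# 2) Reformat the original volume, e.g. with:  format C: /FS:NTFS /Q
#    (format asks for confirmation, so this step is left to the operator).

# 3) Copy the files back onto the freshly formatted volume.
subprocess.run(["xcopy", SPARE, SRC, "/E", "/H", "/K", "/C"], check=True)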

What you propose is essentially a more sophisticated version of the "poor man's" solution above, with the added complication of - if I get this right - the need to physically disconnect and reconnect the disks to exchange them, plus the (solvable, but not to be underestimated) issue with the Disk Signature and the volume serials (unless you wipe the source and copy the data back from the target, which will however take time).

It is a "nice" idea, but I doubt that its objective complication (having a second, "identical" or bigger, hard disk available) is compensated by the increase in efficiency, or that it applies to more than a few, very particular scenarios.

What actually makes "normal" defragmentation a bit slower is the lack of enough free, contiguous space on the volume, so - if anything - it would IMHO make more sense to move to the second volume only the largest and most fragmented files until you have enough free space, then run a "normal" defrag on the source, and then copy the temporarily moved files back from the second disk.
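That simpler approach could be scripted along these lines. Again only a sketch: the paths and the amount of space to free are placeholders, plain file size is used as a crude stand-in for "largest and most fragmented" (finding the real fragment count would need low-level NTFS queries), and the defrag step needs an elevated prompt.

import os
import shutil
import subprocess

SRC, SPARE = "C:\\Data", "X:\\spill"   # placeholder paths
NEED_FREE = 50 * 1024**3               # free up roughly 50 GB (arbitrary figure)

# Collect files on the source, biggest first.
files = []
for root, _, names in os.walk(SRC):
    for n in names:
        p = os.path.join(root, n)
        files.append((os.path.getsize(p), p))
files.sort(reverse=True)

# Temporarily move the biggest files out until enough space is freed.
moved, freed = [], 0
for size, p in files:
    if freed >= NEED_FREE:
        break
    dst = os.path.join(SPARE, os.path.relpath(p, SRC))
    os.makedirs(os.path.dirname(dst), exist_ok=True)
    shutil.move(p, dst)
    moved.append((p, dst))
    freed += size

# Run the built-in defrag on the now-roomier source volume (/U shows progress).
subprocess.run(["defrag", "C:", "/U"], check=True)

# Move the spilled files back to their original locations.
for original, dst in moved:
    shutil.move(dst, original)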

Another form of optimization (for boot times only) was once (for the BartPE's and similar) to burn the files onto CD in the exact sequence in which they are loaded at boot time.
On CD/DVD (please read: very slow media) this had dramatic effects.
Later, on USB sticks, we tried the same approach with (obviously) smaller effects the faster the stick was; on (fastish) USB disk drives the boot time decrease was negligible. I have to presume that on internal disks it would be something that, while it can be observed and measured by timing the process, is in practice unnoticeable.

The "only" advantage that I can see in this or a similar approach on NTFS is that the $MFT will be at its minimal size, but - on the other hand - on a "normally used" volume the $MFT does not usually grow much more than needed (the exception being if you create a zillion files and later delete them).

jaclaz


All of jaclaz's comments in the previous post are totally right.

But I would like to add something else. If you are using any of the Compact modes on the whole drive or on some of your folders (the new compression available starting from Win10: XPRESS 4K, 8K, 16K or LZX), then as long as the compressed files are moved within the same drive they will not lose the compression; but if they are copied, whether to the same drive or to another drive (even one located on the same HD), they will arrive uncompressed, greatly increasing the used space. In that scenario it is always better to defragment the drive (which is a move procedure) and not to copy any folder that may contain Compact-mode compressed files.
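If some files do lose their Compact compression after a copy, the state can be checked and re-applied with the built-in compact.exe (Windows 10 or later, elevated prompt); a small sketch, with the folder path as a placeholder and LZX chosen arbitrarily as the algorithm:

import subprocess

folder = "D:\\Apps\\SomeTool"   # placeholder path

# Report the current compression state of the files in the folder (and below).
subprocess.run(["compact", "/s:" + folder], check=True)

# Re-apply LZX compression to the folder contents.
subprocess.run(["compact", "/c", "/exe:lzx", "/s:" + folder], check=True)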

alacran
