NTFS tweaks


BigDaddy

I'm guessing it's the "Last access timestamp" if you use it in conjunction with their software.

When I tried UltimateDefrag it asked me to turn it on. I uninstalled it soon after. As defrag software it's pretty tweakable, but you need to run it every couple of days or the hard disk gets clogged up again. It was most famous for its graphical representation of your drive as a circle, claiming that all the others are wrong. But guess what: theirs is wrong too. Just try it on a drive with multiple partitions; every partition is shown as a full circle, and there's no way it looks like that at the physical level. With modern hard disks, I doubt you could even get past the abstraction of the HDD electronics and peek at the platters.

I'm guessing again: this time they made it a service, or something similar, that runs all the time.

So, that's my opinion of this software: a flashy marketing gimmick, with no real improvement over other defragmenters.

GL


^ The graphical diagram is only symbolic; it's not meant to be taken literally. They show a full circle for each partition because it's easier to represent the data that way.

But the relative positioning of the data is accurate. Say you have Akon01.mp3 right on the inner track of the D: drive; it is indeed at the start of D: physically. What confuses people is that they then think Akon01.mp3 is at the start of the disk or platter, which is a misunderstanding. Considered across the whole drive, Akon01.mp3 would actually lie after the C: drive.

Thus, the program cannot position the data of ALL partitions at the beginning of the drive; that can only happen for the C: drive.
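To put some numbers on it, here's a rough sketch (Python, with made-up partition sizes) of how a position inside a partition maps to a physical position on the disk:

    # Hypothetical layout: C: takes the first 40 GiB, D: starts right after.
    SECTOR = 512                               # bytes per sector (typical)
    C_START_LBA = 63                           # classic XP-era partition offset
    C_SIZE_SECTORS = 40 * 2**30 // SECTOR      # 40 GiB worth of sectors
    D_START_LBA = C_START_LBA + C_SIZE_SECTORS

    def physical_lba(partition_start_lba, offset_in_partition):
        # A sector at some offset within a partition sits at
        # partition start + offset on the physical disk.
        return partition_start_lba + offset_in_partition

    # A file at the very start of D: is nowhere near the start of the disk:
    print(physical_lba(D_START_LBA, 0))        # 83886143, i.e. after all of C: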


UltimateDefrag is a great program but their website and marketing are designed entirely for people born yesterday.

Any defragmenter that can bring your fragmentation down to 0-1% and also perform file placement can achieve the same as what UD claims. They are by no means the first to do so.

File Modification Date

- File.doc was modified yesterday

- File.rtf was modified today

File.rtf goes before File.doc (closer to the beginning of the disk) because, judging by the dates, it is assumed that you use the .rtf file more often than the .doc. The program gives priority to your most commonly used files and places your least commonly used files nearest the end of the disk, where access performance matters less.

File Access Date

- You opened the .DOC file in M$ Word yesterday

- You opened the .RTF file in M$ Word today

Same as above, but based on access dates rather than modification dates.
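As a rough illustration (this is not UD's actual code, just the idea in Python), both strategies boil down to sorting on one timestamp or the other:

    import os

    def placement_order(paths, by="mtime"):
        # Most recently used files first, i.e. nearest the start of the disk.
        # "mtime" mimics the Modification Date strategy, "atime" the
        # Access Date one.
        key = {"mtime": os.path.getmtime, "atime": os.path.getatime}[by]
        return sorted(paths, key=key, reverse=True)

    # Assuming both files exist in the current directory, File.rtf
    # (touched today) sorts ahead of File.doc (touched yesterday):
    print(placement_order(["File.doc", "File.rtf"]))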

Bottom line: is there any noticeable performance difference between these two methods? Negligible. If the files are completely defragmented and strategically placed in some way, file read/write performance will be significantly improved (approx. 40%) compared to a drive that has several thousand fragmented files and is 20% (or whatever percent) fragmented overall.

There are other methods of file placement, such as file/folder name (alphabetical) and File Creation Date, the latter introduced in the latest version of UD (1.64), released yesterday. Will either make your HDD any faster than it was with files arranged by Access or Modification Date? No.

Technically, there will be a difference of roughly 4-10 ms (milliseconds) in the time it takes the HDD's read head to travel from one file to another, depending on the order they're arranged in, but performance will still seem the same to the general user.

During file access, the read head has to do many queries. From what I understand, it goes something like this:

If you have a file such as:

C:\Documents and Settings\User\My Documents\My Received Files\User\Image.jpg

The read head has to query the MFT for each subdirectory consecutively (going back and forth) until it reaches the actual file.

So you double-click Image.jpg and the arm goes like this:

MFT > C:\ > MFT > Documents and Settings\ > MFT > User\ > MFT > etc... until Image.jpg.

UltimateDefrag also has an option to place directories (shown in green) next to the MFT, so there is less time/distance for those back-and-forth queries.

Now, my understanding may be slightly off, but that is the gist of it. Accurate, if not exact.
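For what it's worth, here's a toy model (Python, nothing like real NTFS internals, which use B-tree indexes and heavy caching) of the lookup pattern described above; each path component means another trip back to the MFT:

    # Toy MFT: maps a (parent record, name) pair to a record number.
    # Record 5 really is the root directory's MFT record on NTFS;
    # the other numbers here are invented.
    MFT = {
        (5, "Documents and Settings"): 100,
        (100, "User"): 101,
        (101, "My Documents"): 102,
        (102, "My Received Files"): 103,
        (103, "User"): 104,
        (104, "Image.jpg"): 105,
    }

    def resolve(path):
        record = 5                                    # start at the root
        for name in path.split("\\")[1:]:             # skip the "C:" part
            print("seek to MFT, look up", repr(name), "under record", record)
            record = MFT[(record, name)]              # each hop = a head seek
        return record

    resolve("C:\\Documents and Settings\\User\\My Documents"
            "\\My Received Files\\User\\Image.jpg")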

So when people debate whether PerfectDisk or Diskeeper is better, in terms of actual real-world performance you will not notice the difference. One of them goes by Access Date and the other by Modification Date.

There are a lot of products that can do file placement, but some do not do it as well as others.

PD, DK, Vopt, UD, O&O Defrag, MST Defrag, JKDefrag and a few others. There are about a dozen and a half more, but they're not worth bothering with.

I hope this information helps. Anyone can feel free to clarify, correct or expand on what I have said.


@ [deXter]: I agree with you, but they tout all over the place that their circle is the right representation and everyone else's is wrong. They could have drawn rings instead and, even with the drive's abstraction layer, it would be a more truthful representation.

GL


Windows XP comes with a feature called the prefetcher that automatically keeps a log of every file you've opened; when the computer is idle, it defragments those files using the prefetcher data.
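(If you're curious what the prefetcher has recorded, the trace files live in C:\Windows\Prefetch on a default XP install, and Layout.ini is the file list it hands to the idle-time defragmenter. A minimal peek in Python, assuming default paths:)

    import os

    prefetch_dir = r"C:\Windows\Prefetch"
    layout = os.path.join(prefetch_dir, "Layout.ini")

    print(os.listdir(prefetch_dir)[:10])        # the .pf trace files
    if os.path.exists(layout):
        # Layout.ini is typically UTF-16 on XP
        with open(layout, encoding="utf-16") as f:
            print(f.read(500))                  # first few entries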

-gosh


So if the 7.5 dollar answer is NTFS disk compression, I still don't see how we could gain a 40 to 500% performance increase.

Exactly - this is not a very good "real-world" paper, and it also seems they're more interested in defragmentation performance (speed, time to defrag) than in the actual performance of opening/saving files on disk (and even then, it appears they tested entirely with text files... :blink:). If you want to increase disk performance, any disk defragmenter that works with the native APIs (like PerfectDisk, Diskeeper, etc.) is a good second choice after creating and formatting the partition correctly for the device(s) underneath it, and for the intended usage patterns of the disk(s), before actually putting it to use. Keeping a disk defragmented is important, but making sure it was created correctly is just as important (maybe more so?).
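(Side note: if you want to check what cluster size a volume was actually formatted with, one quick way is the Win32 GetDiskFreeSpaceW call; a minimal ctypes sketch for the C: volume:)

    import ctypes

    spc, bps = ctypes.c_ulong(), ctypes.c_ulong()
    free, total = ctypes.c_ulong(), ctypes.c_ulong()

    # GetDiskFreeSpaceW reports sectors-per-cluster and bytes-per-sector.
    ctypes.windll.kernel32.GetDiskFreeSpaceW(
        "C:\\", ctypes.byref(spc), ctypes.byref(bps),
        ctypes.byref(free), ctypes.byref(total))

    print("cluster size:", spc.value * bps.value, "bytes")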


To all: a couple of users at donationcoder.com got together and purchased the file. It is all about NTFS compression, and it claims that, with text files, you can gain access speeds up to 500% greater than what you would normally get. See the thread mwb posted above for a more detailed discussion of what we have found.
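(For anyone who wants to sanity-check the claim themselves: NTFS compression can be toggled per file with the stock compact.exe tool, and a crude timing comparison might look like the sketch below. The file name is made up, and the OS file cache will skew the numbers unless you control for it.)

    import subprocess, time

    def timed_read(path):
        t0 = time.perf_counter()
        with open(path, "rb") as f:
            data = f.read()
        return time.perf_counter() - t0, len(data)

    # big.txt stands in for some large text file on an NTFS volume.
    print("uncompressed:", timed_read(r"C:\test\big.txt"))

    # compact.exe ships with Windows; /c turns NTFS compression on.
    subprocess.run(["compact", "/c", r"C:\test\big.txt"], check=True)

    # To measure honestly, defeat the cache between runs (e.g. reboot,
    # or use a file much larger than RAM).
    print("compressed:  ", timed_read(r"C:\test\big.txt"))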


To all, a couple of users at donationcoder.com got together and purchased the file. It is all about NTFS compression and claiming that, with text files, you can gain access speeds up to 500% greater than what you would normally get. See the thread mwb posted above for a more detailed discussion of what we have found.

I saw that, and that was the reason for my previous post: it's a crock, and I can't think of a single real-world scenario where this will hold true.


Well, there are a few cases where it can improve performance; one example would be the PlatformSDK lib/include folders. But the savings are negligible in the real world IMHO, and I can't think of any better examples.

I've never liked DiskTrix anyway; the marketing is overhyped, etc. And what's with the "disk layout" in UltimateDefrag? That would only make sense if they had a database of all hard drives and their internal platter organization...


Years before the PC and the word "defrag" entered our vocabulary, I and other mainframe techs were doing what amounts to a defrag; we just didn't call it that.

The old hard drives got fragmented and had holes in the data pattern just like modern drives do.

We'd just copy everything to another drive or tape and then reformat the HD and copy everything back.

Voila! NO fragmentation and no holes.

I employ that same regimen today with my 200 gig SATA2 drives.

I first clean up the drive, removing every file possible that's not needed.

I even delete the pagefile and old restore files and of course all the temp files.

I do that with batch files on my Ghost boot disk.
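(Roughly what such a cleanup batch does, sketched here in Python with made-up paths; note the pagefile can only be deleted with Windows offline, which is why it's done from the boot disk:)

    import glob, os, shutil

    # Hypothetical cleanup targets; adjust for your own system.
    targets = [
        r"C:\pagefile.sys",       # only removable when Windows is offline
        r"C:\Windows\Temp\*",
        r"C:\Documents and Settings\User\Local Settings\Temp\*",
    ]

    for pattern in targets:
        for path in glob.glob(pattern):
            try:
                if os.path.isdir(path):
                    shutil.rmtree(path)
                else:
                    os.remove(path)
            except OSError as e:
                print("skipped", path, "-", e)   # in-use files stay put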

Then, I make a Ghost backup image of my C: drive and save it to D:.

Then I verify the integrity of that file, again using Ghost's "Check" option.

Then, when I know the Image file is viable, I do an immediate RESTORE of that file back to C:. The entire drive is rewritten as if it were new. All the files are in perfect order, and of course there are NO spaces and NO fragmentation.

When I'm done and check the drive in Windows Defrag, it looks something like this:

[attached screenshot: MyDrive.jpg]

In less than fifteen minutes, I've backed up my entire C: drive and done a great defrag.

The green area at the end of the blue data is the pagefile, remade by Windows on the first boot after the restore.

It don't get much better'n that!

B)


The "NO spaces" part isn't exactly optimal. That can end up causing more fragementation. Say for instance you have a Word document and you edit that document. Now instead of the drive having a little extra space right after the file to write those changes, the changes have to be written on another part of the disk. Thereby fragmenting the file. Now you say "well, that doesn't happen often", but how many ini, tmp, log, cache, registry hives, etc, etc files are accessed/modified by Windows during normal operation?

People complain that "defrag utility XYZ doesn't consolidate all the free space". The explanation above is one reason why a good defrag utility won't.

Defrag utilities (particularly the latest version of Diskeeper; not sure about others) account for this when defragmenting files. They'll take files that are used a lot but not edited (executables, DLLs, etc.) and put them together, while other types of files are written so that updates can be placed on the drive with little to no fragmentation. It's not possible to remove all fragmentation, but you can certainly minimize the effects of normal operation.

For those of you wondering about Diskeeper 2007 vs. the others, here's a good article. I'm using this method on a SQL2005 x64 cluster with great results. The difference was noticeable after just a few days (Diskeeper has to "learn" the usage patterns).

I'm not saying that other defrag utilities aren't good, I just know what Diskeeper 2007 has done for us. It's much improved over previous versions.

