
Making XP faster - 2 tricks


esecallum


By using 32k clusters instead of the default 4k, on either NTFS or FAT, you can make XP and your hard drive much faster, as fewer read/write operations are required - at least 32k/4k = 8 times fewer!
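To put numbers on that claim, here's a minimal sketch (Python; the 1 MB file size is just an example) of how the cluster count drops as the cluster size grows - whether that actually means fewer disk operations is debated below:

```python
import math

# Hypothetical example: how many clusters a 1 MB file occupies
# at different cluster sizes.
FILE_SIZE = 1 * 1024 * 1024  # 1 MB, an arbitrary example

for cluster in (4 * 1024, 32 * 1024):
    clusters = math.ceil(FILE_SIZE / cluster)
    print(f"{cluster // 1024:>2} KB clusters -> {clusters} clusters")
# 4 KB -> 256 clusters, 32 KB -> 32 clusters (8x fewer, as claimed)
```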

Also, by disabling performance logging/monitoring using Win2000 group policy control software, you can make XP about 20% faster.

anyone tried it?



20%? No way.

Yes, in theory doing those things will make it faster, but not enough to notice. I'm not sure there is a group policy setting for performance. I use the Extensible Counter List tool: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=7ff99683-b7ec-4da6-92ab-793193604ba4
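For what it's worth, the Extensible Counter List tool (exctrlst.exe) is generally understood to work by flipping a per-service registry value. A minimal sketch of the same toggle, assuming the "Disable Performance Counters" value and a hypothetical service name (requires admin rights; back up the registry first):

```python
import winreg

# Assumed mechanism: set "Disable Performance Counters" = 1 under the
# service's Performance key, which is what exctrlst.exe appears to toggle.
SERVICE = "PerfDisk"  # hypothetical example; substitute the service to silence
key_path = rf"SYSTEM\CurrentControlSet\Services\{SERVICE}\Performance"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                    winreg.KEY_SET_VALUE) as key:
    # 1 disables the counter DLL for this service; 0 (or absent) enables it
    winreg.SetValueEx(key, "Disable Performance Counters", 0,
                      winreg.REG_DWORD, 1)
```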

It worked for me - the Extensible Counter List tool is what I meant.

But larger cluster sizes also reduce the splitting of files to fit them and the work of reassembling them.

Next time I will try 64k clusters. I wish we could go up to 1,024 kilobytes (1 MB). With large drives dirt cheap, the slack does not matter.

Everyone should try using larger cluster sizes.


I agree with -X- here. It shouldn't make anywhere near that much difference (20%), even on a netbook-class CPU...

By using 32k clusters instead of the default 4k, on either NTFS or FAT, you can make XP and your hard drive much faster, as fewer read/write operations are required - at least 32k/4k = 8 times fewer!

Except, that's NOT how it works. Like the other point, I very much doubt you've benchmarked this before coming to this conclusion/recommendation.

Bigger clusters do mean reading fewer clusters for the same file, but each of those bigger clusters is made up of more sectors, which makes the point entirely moot. Reading a 1MB file made up of 256 4KB clusters of 8 sectors each, or a 1MB file made up of 32 32KB clusters of 64 sectors each, still means reading 2048 sectors from disk either way. There's zero gain there. The only real gain you may see is slightly less fragmentation (if you never defrag and don't have some kind of automatic solution), since your file is made up of fewer chunks.
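That arithmetic is trivial to check; a quick sketch (Python, assuming the usual 512-byte sectors):

```python
# Same 1 MB file, two cluster sizes: the number of sectors read from
# disk is identical, which is the point being made above.
FILE_SIZE = 1 * 1024 * 1024  # 1 MB
SECTOR = 512                 # bytes per sector, typical for these drives

for cluster in (4 * 1024, 32 * 1024):
    clusters = FILE_SIZE // cluster
    per_cluster = cluster // SECTOR
    print(f"{cluster // 1024:>2} KB clusters: {clusters:>3} clusters "
          f"x {per_cluster:>2} sectors = {clusters * per_cluster} sectors")
# Both lines come out to 2048 sectors: zero gain.
```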

In fact, in some cases it's going to hurt performance, like when reading a small file (4KB or less, e.g. cookies, ini files and whatnot) or the last cluster of a file, which is mostly empty; that results in reading lots more sectors for nothing and as such hurts performance somewhat. Bigger cluster sizes also mean wasting more disk space. Using larger clusters may also affect other things, like reading/writing to the swap file (Windows uses 4KB memory pages, which match 4KB clusters perfectly).
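The slack-space point is easy to quantify too; a small sketch with made-up file sizes (every partial cluster still occupies a whole one):

```python
import math

# Hypothetical file sizes in bytes: a cookie, an ini file, two larger files.
files = [300, 4 * 1024, 100 * 1024, 1_000_000]

for cluster in (4 * 1024, 32 * 1024):
    allocated = sum(math.ceil(s / cluster) * cluster for s in files)
    slack = allocated - sum(files)
    print(f"{cluster // 1024:>2} KB clusters: {slack:,} bytes wasted as slack")
# The 32 KB layout wastes roughly 14x more space on this sample set.
```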

Do a proper benchmark and you won't bother with it anymore. In a lot of cases, larger clusters will actually slow down your system a bit. The main gain of larger clusters would be for specialized applications, like on a drive storing lots of large files. And even then, I don't personally see enough of a difference to bother there -- even on a video server, where all files are hundreds of MBs each.
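For anyone who wants to check for themselves, a minimal sketch of such a benchmark (paths are hypothetical; copy the same file to volumes formatted with different cluster sizes, and reboot or otherwise flush the cache between runs so you measure the disk, not RAM):

```python
import time

def time_read(path, chunk=64 * 1024):
    # Sequentially read the whole file and report throughput.
    start = time.perf_counter()
    total = 0
    with open(path, "rb") as f:
        while True:
            data = f.read(chunk)
            if not data:
                break
            total += len(data)
    elapsed = time.perf_counter() - start
    print(f"{path}: {total / (1024 * 1024):.1f} MB in {elapsed:.3f} s")

time_read(r"D:\testfile.bin")  # hypothetical: volume with 4 KB clusters
time_read(r"E:\testfile.bin")  # hypothetical: same file, 32 KB clusters
```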

