
Planning for performance


Mr Snrub


This is a quick overview of performance concepts, and my personal approach to configuring my Vista machines for continual good performance.

With a bit of planning beforehand, you can configure the system on which the OS runs to avoid certain situations where performance is impacted - some of which might otherwise require a rebuild (or at least a backup, reinstall & restore) to resolve.

You may disagree with my methods, but I'm not presenting this as a "guide 2 make ur computer da fastest" (or to start a big debate), more "food for thought".

If I have "heavy background tasks" to perform whilst wanting to play a game, then I give the tasks to one of my other machines.

I am not one of those people who wants to burn DVDs at maximum speed while playing MP3s, downloading massive files from the Internet, rendering 3D images, chatting on IM and multi-boxing MMOGs - so bear that in mind when judging how applicable this is to you.

I am, however, a gamer who likes to have a browser & mail client up on my second screen while playing games, and I have been building PCs since my first 4.77MHz XT with 640KB RAM, running MS-DOS 3.3.

FOREWORD

The key to high performance is avoiding bottlenecks.

In terms of raw I/O or operational speed, the order is something like: CPU > CPU cache > RAM > disk & network

A CPU bottleneck is resolved by adding more physical or logical processors - most home-built systems have a single physical processor slot/socket, so the only option (if supported by the motherboard) is to swap the physical processor for one with more logical processors.

(But make sure the bottleneck is not being caused by software bugs first!)

CPU cache is a resource on the physical processor itself; it can improve performance in some situations, but it is not a resource that tends to cause problems through being exhausted or left unused.

RAM I/O bottlenecks are resolved by adding more or faster memory sticks, or making sure you take advantage of dual channel if available (i.e. sticks in the right slots, paired).

Disk bottlenecks are probably the most common - processes running on the system end up waiting for I/O operations to complete, particularly when there are lots of queued I/Os or concurrent I/Os on the same channel.

There are several ways to avoid disk bottlenecks, depending on the type of issue you are facing.

Network bottlenecks are generally not so easy to resolve, but once you get to GigE network speeds (especially with onboard NIC chipsets) you will tend to hit a CPU or disk bottleneck first.

For gamers a bottleneck may also lie in the graphics adapter, or how it is using system memory rather than its own onboard memory - in this case "bigger, better, faster, more" is pretty much the way forward.

Updating graphics card or DirectX drivers can only do so much - don't expect a 10-fold increase in performance from changing driver v104.324.77 to v104.324.91, unless of course there was something REALLY wrong with the older version.

PRE-SETUP

Before letting OS installation media get anywhere near a computer, the hardware and its configuration are the first to address.

If buying a motherboard or spec'ing an entire system, I look for the price break - where the features & spec of the hardware fall within my price range, just before the leap in price where "bleeding edge" technology is found.

MEMORY

Once the motherboard is identified and research has been done to get other users' experience (to avoid BIOS, performance, feature or driver issues), I look at the maximum amount and speed of RAM the system supports.

I install the maximum amount of RAM the system supports, at the fastest speed - RAM is so useful and these days so cheap to acquire that it doesn't make sense to leave room for expansion later.

CPU

For the CPU I tend to find the price break again, but also take into account the FSB before the number of cores and amount of cache.

The cost of computer kit is ever-declining and the CPU is a trivial single item to upgrade, so I don't waste money now by getting the fastest available today - not only will it be cheaper in a couple of months, there will most likely be faster ones by then (which will drive down the value of what I bought).

For my workstations at home I use dual-core CPUs, as quad-cores at the same FSB tend to be too expensive and would not benefit me for gaming.

For servers, on which I tend to run simultaneous virtual machines, I would get the benefit of more cores, so would consider it.

There is also the consideration of matching CPU and RAM FSB speeds to aim for a 1:1 ratio, but I honestly have no idea as to the "real world" value of this - that's the kind of analysis they do at Tom's Hardware or such.

DISK

As I said above, the most common bottlenecks I encounter are in the disk subsystem (YMMV) and there are several causes as well as ways to address this.

One of the problems with the legacy IDE controller is that I/O through a channel (primary or secondary) runs at the speed of the slowest device (master or slave) connected to it, so having 2 devices on 1 channel is itself a bottleneck.

SATA disk controllers have 1 device per channel, as well as offering higher throughput, lower power consumption and tidier cabling - so I use SATA hard disks exclusively.

If I bother with an internal DVD drive at all, I am not concerned as to whether it is alone on an IDE or SATA port - the bottleneck there is likely to be within the device itself anyway.

I have bought a couple of USB 2.0 external DVD-RW drives which I can hook up to whichever machine I need them at that time, to avoid the need to replace/upgrade more internal components.
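
If you want to check what disks and interfaces the OS actually sees before planning your volumes, the in-box WMI command line will list them - a quick sketch, and note that InterfaceType often reports SATA controllers as IDE or SCSI depending on the driver:

rem List physical disks with their index, model, reported interface and size (in bytes)
wmic diskdrive get Index,Model,InterfaceType,Size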

When most people think of improving disk performance they think of RAID - however, having tried a 10,000rpm Raptor RAID0 system I am not convinced of the value of these on home systems, and I take a different approach.

RAID0 will give you plenty of pure performance for single I/O requests, but can suffer from multiple I/Os on the same channel as much as a regular disk configuration, and doesn't avoid fragmentation problems.

RAID1 can, in some systems, give better read performance IF it supports reading from either disk (not all do) - but a write I/O operation is only "complete" once it is written to both disks and so can be slower than a single disk.

RAID takes a single I/O request from a process and converts it into multiple I/Os for the underlying disk subsystem, so it isn't a "guaranteed win".

The more parties involved in a single I/O, the longer it will take and the more prone to delays or errors it is.

Be aware that typical use of a workstation differs from that of a server:

- software is installed & uninstalled more frequently

- temporary files are created & deleted through user activity more frequently

- prefetching is based on user process activity rather than pure file requests

Servers can benefit a lot from RAID disk subsystems (SCSI 15,000rpm disks being common).

Multiple smaller disks are better than 1 massive disk in terms of performance as the I/O can be split across them simultaneously, if you arrange the data correctly.

You can also reduce the problem of fragmentation by physical separation of the data on disk - more on this in the SETUP and POST-SETUP sections, though.

RAID0 and multiple individual disks naturally increase the risk of data loss in the event of a single disk failure, but to me having a decent backup strategy is preferable to a zero-downtime system for non-critical environments.

The multiple individual disk approach is my favoured one at present, as the "system down" risk is limited to the single disk with the OS on it - and that can be clean-installed in ~35 minutes, even from a DVD.

32-bit or 64-bit Windows?

This is also a question that you have to ask before doing any purchases:

- 64-bit may not have (signed!) drivers for all your hardware, but virtually all 32-bit applications should work just fine

- 32-bit (non-Server) has a limitation of up to 4GB of physical RAM (even with PAE), so you might not get use of all installed memory

There is no upgrade path from 32-bit to 64-bit - a clean install is required.

Similarly, a downgrade from 64-bit would be a do-over.
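
As an aside, if you want to check what an existing installation is running and how much RAM it actually sees, WMIC can tell you - a quick sketch, and as far as I remember the OSArchitecture property only exists on Vista and later:

rem Report whether the installed OS is 32-bit or 64-bit
wmic os get OSArchitecture

rem Report the physical memory visible to the OS, in bytes
wmic computersystem get TotalPhysicalMemory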

SETUP

With the hardware in place, setup can begin - and one of the very few questions posed by a Vista/W2K8 installation is the volume on which to install the OS.

I take the first 100GB of disk0 - this will be the active, system, boot, pagefile and crash dump volume, and in the same way that changing a hardware configuration is tricky later on, trying to work around the problem of skimping on the boot volume size can lead to a reinstall.

My strategy for installation is performance over "best use of resources" - I am not concerned about wasted disk space as much as I am about unused disks.

The rest of setup is standard all the way to the desktop appearing.

POST-SETUP

PARTITIONING

My main system at present has 2x 250GB disks, and C: is the only drive letter I use for local hard disk storage... so how do I use the other 400GB?

Volume mount points.

The remainder of disk0 is partitioned, formatted and mounted as C:\Data - this volume is used for files that will not change often (possibly ever), such as MP3s, pictures, FRAPS movies, ISO images, driver installation folders, etc.

Activity on the boot volume will not fragment this static data, nor will the opposite occur as I add more files into C:\Data.

As the static data is not accessed frequently, I also don't generate lots more I/O for disk0 - it is going to be almost entirely OS I/O.
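
For reference, the mounting can be done in Disk Management, but mountvol does the same from a command prompt - a rough sketch, and the volume GUID below is just a placeholder (run mountvol with no arguments to list the real ones):

rem List volume GUIDs and their current mount points
mountvol

rem Mount the data volume onto an empty NTFS folder instead of a drive letter
md C:\Data
mountvol C:\Data \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\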

Disk1 started with no partitions at all, and then I came to install my first application (World of Warcraft), which from experience I know currently takes ~8GB.

Again, skimping on the space allocated to a volume can cause problems later, so I make sure to give plenty of space for patches, mods and expansion packs - I initialized the disk as GPT and created a 25000MB simple partition, which was then mounted on the empty folder "C:\Program Files (x86)\World of Warcraft".

Now I install the application into the new mount point.
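
For reference, this is roughly what those Disk Management steps look like as diskpart commands - the disk number and size are from my setup, so check the "list disk" output before copying anything, and the mount folder must already exist and be empty:

rem Run diskpart from an elevated command prompt, then at the DISKPART> prompt:
list disk
select disk 1
rem Converting to GPT requires the disk to have no existing partitions
convert gpt
create partition primary size=25000
format fs=ntfs quick label="WoW"
assign mount="C:\Program Files (x86)\World of Warcraft"
exit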

Why GPT instead of MBR?

MBR's biggest limitation is the maximum of 4 primary partitions on a single disk - also, if you end up creating extended and subsequently logical partitions with drive letters assigned and later add an additional hard disk, it throws the lettering out because of how primary partitions are enumerated before logical ones.

At this point I have disk0 accessed almost exclusively by Windows and the in-box utilities, and 25GB of the 250GB disk1 is now allocated exclusively for WoW - the remainder of disk1 is neither partitioned nor formatted.

As the data for WoW is on a separate channel and physical disk from the OS, launching the game is very quick as disk I/O for the OS and the game are occurring in parallel.

Also it can never fragment the data of anything else, so the little fragmentation that will occur (patches & expansion packs) will only affect itself and can be addressed specifically with:

defrag "C:\Program Files (x86)\World of Warcraft" -w

After WoW I installed BioShock, another 25000MB volume on disk1 (this one so far seems to have used <6GB) and a volume mount point of "C:\Program Files (x86)\2K Games\BioShock".

Same principle as WoW for performance and fragmentation, and there is no chance I will want to load both at the same time so no I/O bottleneck.

And so on, and so forth for Crysis, Mass Effect, Jack Keane, Assassin's Creed and S.T.A.L.K.E.R. - the last chunk of disk1 I mounted as "C:\Program Files\Other Games" for those smaller games that can coexist without heavy fragmentation being likely.

A picture is worth 500 DWORDs...

[Screenshot: Disk Management showing the partition and volume mount point layout]

Remember, fragmentation isn't just about disk write operations mixed together - when data is removed from the disk it leaves a hole which is then filled by the next write, so uninstalling a large application months after it was installed can cause the next chunk of data written to fill that gap first, fragmenting a brand-new install.

So while Crysis might not be fragmenting itself - it's not patched regularly and doesn't have masses of addons or saved games - when I come to uninstall it, that event will not cause a fragmentation problem for anything else.
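
If you want to check whether a volume has actually become fragmented before bothering to defragment it, the in-box defrag has an analysis-only mode - for example (the -a and -v switches are analyse and verbose in the Vista version, as far as I recall):

rem Analyse the boot volume without actually defragmenting it
defrag C: -a -v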

Yes, there is lots of slack/wasted space in my configuration, but it is because I put the data physically as well as logically where I want it.

I have no illusions that I will still have most of these games installed in 6-12 months anyway, so they can be uninstalled and the volumes remounted elsewhere as needed.

Do I miss drive letters for things like "D: = DATA, G: = GAMES, M: = MP3s"?

Nope, that is why things like folder shortcuts and the quicklaunch toolbar exist - and even if I did, I could use SUBST as needed.
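
For example, something like this would give a temporary G: pointing at one of the mount points - purely illustrative, and note SUBST mappings don't survive a reboot unless you script them at logon:

rem Map a virtual drive letter onto an existing folder
subst G: "C:\Program Files (x86)\World of Warcraft"

rem Remove it again when no longer wanted
subst G: /d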

PAGE FILE

Where should I put the page file and how big should it be?

I always leave it alone, and let Windows determine what it needs and where it should put it.

The more customization you do to the OS, the more you have to understand the impact of it - and some decisions you make can lead to problems later.

Some apps expect/demand a page file exists, and refuse to launch if you have selected "do not use a page file".

Finding the root cause of a bugcheck almost always requires a MEMORY.DMP file - this cannot be created if you've set the page file settings incorrectly (too small, wrong location) or if you don't have the necessary free disk space on the destination volume.

Hence the importance of a partitioning strategy before you start.
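
If you want to see what Windows has chosen without changing anything, a couple of read-only WMIC queries will show the current page file and crash dump configuration - the aliases and property names here are from memory, so treat this as a sketch (for DebugInfoType, I believe 1 = complete, 2 = kernel, 3 = small dump):

rem Show the page file location and its current/allocated sizes
wmic pagefile list full

rem Show the crash dump type and destination
wmic recoveros get DebugInfoType,DebugFilePath,AutoReboot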

There are those that think the page file gets heavily fragmented and causes a massive reduction in performance - I am not convinced that there is any proof of this on modern machines; a lot of "performance issues" are actually perception errors.

Similarly, fragmentation occurs over time or on a very heavily used & abused system, so running a full system defrag daily is overkill.

SUPERFETCH

Superfetch is always blamed for performance issues because it is misunderstood - people see lots of disk I/O, trace it back to this service and disable it, thinking it is part of the "bloat" of Windows.

It uses lower priority I/Os and idle time to do its disk access to populate the system cache with files that form the pattern of "typical usage" for you - which varies from person to person so is built up over time.

If you want to load something from disk, Superfetch's I/Os are pushed to the back of the queue and yours are serviced immediately, so there is no delay but possibly a perception of "omg my disk is being thrashed".

The system cache fills up memory, so people panic when Task Manager reports "free" memory as very low - however, as the cache simply holds data that already exists on the disk in its original location, that memory can be instantly freed and reallocated if needed - so effectively "unused and immediately available memory = Cached + Free".

As I type this, of my 8GB physical memory, Cached is 6433MB and Free is 509MB - I happen to know that my system cache is mostly filled up with huge files from World of Warcraft minutes after I boot and log on, making it very fast to launch.

When I would consider disabling Superfetch:

- systems with low RAM and no ReadyBoost available

- laptops (lots of disk I/O generates so much heat, and the disks are very noisy & slow)

- virtual machines (lots of physical disk I/O to the .vhd file, chance of double-caching)

- temporarily on systems I am going to be rebooting a lot (e.g. hotfixes on a clean installation, installation of kernel drivers requiring reboots, etc.)
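
If you do decide to switch it off on one of the systems above, the underlying service is named SysMain (display name "Superfetch") - a rough sketch, so check the name on your build first, and note the space after "start=" is required:

rem Check the current state of the Superfetch service
sc query SysMain

rem Stop it and prevent it starting at the next boot
net stop SysMain
sc config SysMain start= disabled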

TO SLEEP, PERCHANCE TO DREAM

"Shave 13.2 seconds off your boot time with this simple list of hacks!"... I never understand these kinds of actions - when programming I looked at optimizing code a lot, and one common trait for optimization is to look not where you get the biggest single saving, but where you get the most overall.

i.e. Optimizing a routine that is called one time to get a 3-second saving is not as useful as a 0.1 second saving in a routine called multiple times per second for a few seconds.

The boot process is a peculiar one to focus on optimizing - the system is starting from completely empty and has to go through various initialization routines before the user can interact with the OS to start apps running, it needs to do this just once and possibly not again for weeks.

Sleep - don't shut down.

Suspending your system state to RAM means you should be able to bring it back to operational status in under 2 seconds, complete with the system cache preloaded.

Your boot process not only has to go through the initialization and self-checking routines, but it starts with an empty cache, so Superfetch has a lot of work ahead of it when the system is eventually idle - and if you log on & start launching your apps immediately it won't have had the chance to prefetch anything.

Some devices may not wake correctly when resuming from sleep - this is a fault of the device itself, its driver or the BIOS not handling power state transitions correctly. I had a system (AN9 32X Fatal1ty) where the onboard network adapters did not wake properly and I had to disable & re-enable them to get them out of the "no network cable connected" state, which was annoying.
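
If resume misbehaves, powercfg can at least show which devices are armed to wake the system and what triggered the last wake - these switches are in the Vista version as far as I recall:

rem List devices currently allowed to wake the system
powercfg -devicequery wake_armed

rem Show what woke the system from its last sleep
powercfg -lastwake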

DISABLING SERVICES

I will always advise against the disabling of services, which is usually justified on the premise that:

- you don't need them

- more services running = wasted CPU and memory

- they slow down the boot process

While you may not explicitly use the services provided by a particular process, without a great deal of experience managing Windows systems you will not know the potential impact of disabling them - maybe not today or next week, but when ApplicationX refuses to install with a random "unspecified error" it could be a pain to track it down to a service that is on by default having been disabled.

Services that are not used or do not do anything will consume no CPU time - their threads will be in the WAIT state, not READY TO RUN - and the memory they have allocated is virtual, so it will eventually end up paged to disk rather than consuming much precious physical memory.

As for slowing down booting, see above - the boot process isn't something that should be occurring that much ideally.

Disabling services is for the most part a placebo - yes, there are some situations where services consume masses of CPU time due to bugs, corruptions or misconfigurations, but these should be addressed rather than side-stepped.
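
If a service genuinely is chewing CPU, the first job is identifying which one - most run grouped inside svchost.exe processes, so map services to process IDs rather than guessing (in-box commands; <ServiceName> below is whatever you find):

rem Show which services are hosted in which svchost.exe instance, by PID
tasklist /svc /fi "imagename eq svchost.exe"

rem Then cross-reference the busy PID from Task Manager or Resource Monitor
sc query <ServiceName>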

IDENTIFYING BOTTLENECKS

Even though this was meant as a way to provide some tips on how to avoid/prevent performance issues, I will mention briefly the tools at your disposal built into Vista to help with identifying which resource is bottlenecked:

- Task Manager, for an overview of realtime CPU, virtual memory and network I/O

- Resource Monitor, for a detailed view at realtime/recent CPU, disk, network and memory I/O

- Performance Monitor, for a longer-term statistical view of system counters for pretty much everything (any counter with "queue" in the name is of particular interest)

The Resource Monitor can be interesting to watch after booting Vista - observe how Superfetch starts to load files for apps it thinks you are going to be firing up soon, populating the system cache.
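
For the "queue" counters mentioned above, typeperf gives a quick sample from the command line without opening the Performance Monitor UI - for example (counter paths and switches from memory: -si is the sample interval in seconds, -sc the sample count):

rem Sample CPU, disk queue length and available memory every 5 seconds, 12 times
typeperf "\Processor(_Total)\% Processor Time" "\PhysicalDisk(_Total)\Avg. Disk Queue Length" "\Memory\Available MBytes" -si 5 -sc 12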

Performance tuning, and designing a configuration for peak performance is, like security, a journey and not a destination.

There is no guarantee this is how I'll set up my next system, but come Windows 7 I will most likely have some idea of what worked & what didn't - maybe just fine-tuning the volume sizes, for example.

How you use a computer can strongly influence how you benefit from different methods of performance tuning - there is no "silver bullet".

Always use measured methods to determine performance levels, not user perception - baselining is something I reckon hundreds of thousands of gamers do for their graphics card alone (using 3DMark or similar), but neglect the common components in the system.

Sometimes it is worth doing empirical tests, such as observing the impact of removing anti-virus and then comparing with another vendor's product - and be aware that in the case of kernel drivers there is a big difference between disabling a product's services and uninstalling it.



How did you get a 25,000 MB partition? ;)
When creating a partition you specify its size in MB - I couldn't be bothered to calculate 25x1024, so I just rounded down and used the "hard disk manufacturer" version of 1GB = 1,000MB.

So RAID isn't best for performance?
In my experience it didn't make a perceptible difference in system performance moving from a single 7200rpm SATA disk to 2x 10000rpm Raptor SATA disks in RAID0.

Again, this is my experience from my style of using Windows and applications, so other people may have different experiences.

I had XP x64 and Vista x64 on the same RAID set during Vista's beta period, and it was clear after a while that Vista's Superfetch out-performs XP's prefetch once it has had a chance to build a history of your commonly-used apps.

Reading the data from disk before you need it is so much more useful than being able to read it on demand a fraction of a fraction quicker.

Would a Raptor be preferable to a 7200rpm drive?
High RPM disks are useful for sustained, contiguous disk operations - if your data is fragmented then the disk will spend a lot of time seeking instead of reading.

If applications read a lot of small files then high speed doesn't help so much.

For me, the cost of 10k over 7.2k rpm disks is not worth it, plus the Raptors are noisy beasts.


If applications read a lot of small files then high speed doesn't help so much.

Actually, vice versa. With many small files, it's better to have a lower random seek latency.

That's the benefit of a Raptor.


So RAID isn't best for performance? Would a Raptor be preferable to a 7200rpm drive?

It totally depends on what you're doing in the first place, what RAID level (0, 1, 10, 5, etc), what drives, what controller, stripe size, etc. There just isn't a one-size-fits-all solution that's better at everything.

Again, this is my experience from my style of using Windows and applications, so other people may have different experiences.

That's what it comes down to really.

Personally, I see a gigantic difference in speed for my use with RAID 0. RAID 0 doesn't help one bit when it comes to latency, but that's not what I care about. What I care about is max throughput (how many MB/sec I can move around) while copying lots of large files around (lots of encoding, maintaining disk images, digital media, etc), and RAID 0 clearly wins there, there's no contest.

10k rpm drives have lower rotational latency (and usually lower seek times in general), so they help for loading tons of small files, but more often than not I see MUCH cheaper drives with a far better max throughput, and with far better capacity to boot. 10k rpm drives are just too bloody expensive and WAY too small for me, and they make a lot of heat/noise too. 15k rpm drives are even worse... Sure, in a server loading tons of tiny files from disk every second (like serving web pages) it helps a lot, but when you're moving an ISO image around or such, you can find cheaper & bigger drives that will do that faster.

I don't play games, so no idea about those... A $50 vid card is overkill for me :lol:

Different tasks have different bottlenecks too. By far my main bottlenecks are CPU (tasks like encoding H.264) and disk I/O. Again, it ultimately comes down to what someone does with their PC.


It totally depends on what you're doing in the first place, what RAID level (0, 1, 10, 5, etc), what drives, what controller, stripe size, etc. There just isn't a one-size-fits-all solution that's better at everything.

...

Again, it ultimately comes down to what someone does with their PC.

Let's say, for example, I wanted faster boot times (OS) and faster loading times for applications - which would be better:

1x 74GB Raptor

or

2x 320GB Caviar SE16 (in RAID0)


Let's say, for example, I wanted faster boot times (OS) and faster loading times for applications - which would be better:

1x 74GB Raptor

or

2x 320GB Caviar SE16 (in RAID0)

I sure wouldn't go for such an old raptor myself. It's just not that amazing by today's standards, it's still quite expensive, and it's tiny.

Haven't looked at that specific model's speed lately either (the 320GB Caviar).

Nowadays, you can get some great drives, with lots of space, and great performance too. A single 640GB WD drive (new model, the WD6400AAKS) is nearly as fast as a 300GB velociraptor at most things (not enough difference for most people to tell), and that's for a single drive. The big difference? The 640GB WD6400AAKS is $90, whereas the 300GB velociraptor is $330 -- about 4x the price for only half the space, all that for a ~15% faster boot with Vista, saving you all of 30 seconds a month or so.

For half the cost of such a fancy drive, you can get a pair of WD6400AAKS's that will give you over 4x the space, and beat the raptor on a lot of tasks.

Besides, Vista boot time shouldn't be a huge concern like Mr Snrub said -- just make it sleep. You shouldn't have to reboot too often (making the velociraptor even more pointless). And app startup times should already be very minimal with SuperFetch (your common apps will already be cached in RAM in advance), again, making the very expensive velociraptor less and less worth it.


For what it's worth, I have 2 Samsung 1TB Spinpoints in a RAID 1 on an ICH8R controller and they are so much faster than my last boot volume, which was a RAID 0 config on 2x 40GB Maxtors, again on ICH8R. I would agree with crahak that it ultimately comes down to what you do with your PC, but I seriously doubt there are many consumer-level drives, even Raptors, that are faster than these things! :)


For what it's worth, I have 2 Samsung 1TB Spinpoints in a RAID 1 on an ICH8R controller and they are so much faster than my last boot volume, which was a RAID 0 config on 2x 40GB Maxtors, again on ICH8R. I would agree with crahak that it ultimately comes down to what you do with your PC, but I seriously doubt there are many consumer-level drives, even Raptors, that are faster than these things! :)

Those new Spinpoints have very fast read & write speeds for sure (better than the WD6400AAKS I was mentioning). Of course they're going to be a LOT faster than a pair of very old Maxtors :lol:


Well, those were a lot of words that said a lot of nothing... I was expecting, given that this is the Vista forum, that this would be a thread about getting the most out of Vista, not "dumping your life savings into a ballsy Vista computer so you can run the trash that is full size Vista at the speed of yesteryear's XP".

May as well be a spokesman for Newegg. If you want to plan for performance, start your article with the phrase "First, go to vlite.net".

edit: It's also quite wrong. As with all Microsoft products, Windows has no goddang clue where best to put the page file or how big to make it. It just puts it wherever it pleases, using its ancient page file code from NT 3.51. Micropartitioning also doesn't get you any extra performance when you break it down into programs. You don't even take into account the disk access time in different "zones", nor the fact that application data itself never changes. If you're going to partition, the proper way to do it is "System, pagefile, applications, users", in that order. The system needs the fastest access (top of the drive), page file lies between system and programs, applications after that, and static user data like the profile get put at the end.

And that's only after glancing at like, two of the sections I read.


You don't even take into account the disk access time in different "zones", nor the fact that application data itself never changes.
You think? Why do you think I put the OS in the first partition on disk0, which is not 100% of the disk?

And the remainder of the disk is for non-changing data, so random access to the last 60% of the disk is not going to be an issue for the static data?

The applications on disk1 are in different partitions and will not be used concurrently, so no battle for disk I/O or random seeks aplenty there.

With 8GB RAM and the fact that the Cache Manager uses virtual memory, the pagefile is going to reside on... drive C:, maybe?

Which shares I/O with... the OS... of which large portions are cached... meaning reads from RAM or the pagefile.

Conversely, putting the pagefile on a drive other than the boot partition leads to issues with creating dump files, and if it is a separate partition on the same disk as anything else to "avoid fragmentation" then you have now increased your average seek time between I/O requests from the Cache Manager/Memory Manager and the other partitions.

Micropartitioning also doesn't get you any extra performance when you break it down into programs.
You are thinking short-term here - this strategy is about maintaining a level of performance, not designing a system that runs great from a clean install but over time starts to degrade. I think that is one of the main reasons people have traditionally done clean installs periodically - not because the OS is bloated, but because of how the data is (not) arranged on disk.

The main point I was trying to get across, which you missed, was avoiding bottlenecks by having multiple disks to allow concurrent instead of queued I/O - your setup is how I used to partition single-disk setups years ago.

And to cover it one more time, for how I use the computer, my configuration has proven to be the most efficient - there is no "one size fits all" and planning for performance requires analysis.

Looking at where your data is placed while taking into consideration how you use it and how the OS uses it, understanding how NTFS writes data and how virtual memory and the Cache Manager work, then testing and monitoring with performance tools to compare strategies is the only way to say what works best for a particular usage scenario.

As I tried to make clear, it's "food for thought", not presented as THE way to do it, but some of the things to consider when next doing it.


Conversely, putting the pagefile on a drive other than the boot partition leads to issues with creating dump files, and if it is a separate partition on the same disk as anything else to "avoid fragmentation" then you have now increased your average seek time between I/O requests from the Cache Manager/Memory Manager and the other partitions.

Exactly :)

dumping your life savings into a ballsy Vista computer so you can run the trash that is full size Vista at the speed of yesteryear's XP".

May as well be a spokesman for Newegg

A computer that runs Vista decently isn't anywhere near as expensive as you make it out to be. Besides, Vista has some features that make it faster than XP in many ways (like SuperFetch, which greatly reduces disk usage, and hence delays, when you start an app, as it's already cached in your free RAM).

If you want to plan for performance, start your article with the phrase "First, go to vlite.net".

If you do know better than Mr Snrub then why not write the article? Most people around here would agree Mr Snrub is VERY knowledgeable (I wish I had half his WinDbg skills). He wasn't made a mod for no reason either - obviously some people agree. And Vista runs just fine here (no slower than XP did), without using vlite at all. There's just no requirement for this.

As with all Microsoft products, Windows has no goddang clue where best to put the page file or how big to make it. It just puts it wherever it pleases, using its ancient page file code from NT 3.51.

So nobody knows where but you? You do realize that saying you know better than Mr Snrub and all the experts at Microsoft who built the system in the first place (many of whom have PhDs in Comp Sci and more in-depth knowledge about Windows' internals than all of the MSFN users combined) comes across as pretty arrogant? (That reminds me of the 7-bit text file thing again, or calling people names for not agreeing with a certain network map...)

The pagefile could go on another disk (more I/O speed and usually less seeking -- but like Mr Snrub said, potential problems), not further into the same disk (just more seeking -- no real advantage over a defragged & well-placed page file in the first place). Besides:

1) Vista is not as dumb as XP when it comes to deciding what to page to disk. XP seemingly just tries to page it all out to have the most RAM free (free RAM is wasted RAM, really). Vista isn't quite that drastic, so the memory page you need won't be paged out to disk as often in the first place

and more importantly:

2) Vista now has I/O prioritization, so if it badly needs a memory page that's been paged to disk, it's going to get it first and make the not-so-important stuff wait (like loading that spacer gif from your browser's cache), and I/O-intensive processes/operations aren't going to make everything else (like loading that page of memory you need from disk) wait quite as long. It also makes use of NCQ, and even has various strategies based on different access patterns (the I/O subsystem has been very much improved overall)

So it's not quite as critical as it used to be.

Mr Snrub has a pretty sound strategy to avoid heavy disk fragmentation (and bottlenecks), and there's really nothing wrong with that. He also had several other good points, like using volume mount points instead of having 63 different drive letters (I'm a pretty heavy user of those too -- A/B letters wasted for floppies, then a dozen HDs often with more than 1 partition, a card reader that mounts as 4 drives, a DVD writer, 1 or 2 Daemon Tools virtual drives, mapped network shares, TrueCrypt mounted drives, USB memory sticks / mp3 players / USB hard drives, etc -- you VERY quickly run out of letters!)


If you want to plan for performance, start your article with the phrase "First, go to vlite.net".
I would have to disagree with this - if you need to use vlite to give your Vista box performance increases, then you are doing something wrong, or have a box that meets the minimum specs and not much more (ever try to run XP on a minimum-spec box for XP? exactly). You are welcome to your opinion, and your tweakage - but this article is pretty much spot-on for long-term performance design for Vista.
As with all Microsoft products, Windows has no goddang clue where best to put the page file or how big to make it. It just puts it wherever it pleases, using its ancient page file code from NT 3.51.
First, that statement is incorrect in so many ways I can't even think of where to start. The pagefile is created during setup in the largest contiguous free space available, and usually near the first third of the drive. It can (and likely will) get fragmented if it needs to increase size over time, as it is inevitable that there will be little to no free space near the end of the current paging file during resize. This is not something easily programmed around - would you rather it get created at the end of the drive? It cannot grow there without being defragged and moved offline, so that's not possible. How about the middle of the drive? If files get placed anywhere after the paging file, it cannot grow without fragmenting, so the initial problem remains.

So I ask again, where should it be placed during setup? Windows is going to place the paging file on the boot volume for (amongst other things) recovery purposes - yes, a paging file on a separate volume is preferred for disk fragmentation, but you get no dump data if the box bugchecks nor any log of the problem in the event log if said dump is not generated.

Micropartitioning also doesn't get you any extra performance when you break it down into programs. You don't even take into account the disk access time in different "zones", nor the fact that application data itself never changes. If you're going to partition, the proper way to do it is "System, pagefile, applications, users", in that order. The system needs the fastest access (top of the drive), page file lies between system and programs, applications after that, and static user data like the profile get put at the end.
With the speed and size of today's hard drives, you don't get markedly better performance anymore in trying to lay all of your data out on specific parts of a drive. At this point in time in hard drive and controller technology, as long as the data is fragmented as little as possible you will get good performance (especially in Vista). 6-10 years ago this might have been something to worry about, but at this time it really isn't.
And that's only after glancing at like, two of the sections I read.
Please re-read entirely before bashing a post again. And, I've read your other posts (like your partitioning post, for instance), and have not commented on how I disagree with it on newer hardware (that maxtor drive is slow and old). Your theories may be right on old hardware and Win9x or 2000 (or even XP on older hardware, to a lesser extent), but this post is about newer hardware and Vista. Please limit your comments about this article to what this article is about - most people receiving Vista going forward are going to run it on newer hardware, and as such this is entirely accurate.