
Windows 7: Possible, Advisable, to Disable the Page File?


Radish

Recommended Posts

I have yet to upgrade a system that came with an HDD to an SSD, so I can't speak on that. FWIW, the most dramatic performance improvements I've seen on various computers have come from (1) using ReadyBoost and (2) adding more RAM.

 

This thread has been educational for me. In this day and age of multi-GB RAM systems, I didn't realize that there was any real point to RAMdisks - though they come with the real danger that whatever valuable stuff you had on the RAMdisk would go *POOF* if and when Windows crashed.

 

--JorgeA



An alternative might be to use something like Primo Cache, which interposes a dedicated low-level cache between the file system and the disk and implements (very) lazy writes - though again there's the possibility of data loss from instability, and in my experience the cache subsystem adds a bit of instability of its own.

 

With the lazy write process, if a temporary file is created, used, then deleted (which happens quite often), the data never ever makes it to the disk - which does reduce the I/O load quite a bit. 
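Here's a toy sketch of that lazy-write idea (an illustration of the concept only, not Primo Cache's actual implementation): writes sit in RAM, and deleting a file before a flush simply cancels its pending write.

```python
# Toy model of a lazy write-back cache: writes are buffered in RAM and only
# flushed to the "disk" later; deleting a file first cancels its pending write.
class LazyWriteCache:
    def __init__(self):
        self.pending = {}      # path -> data waiting to be written
        self.disk_writes = 0   # counts writes that actually reach the disk

    def write(self, path, data):
        self.pending[path] = data      # stays in RAM for now

    def delete(self, path):
        self.pending.pop(path, None)   # pending write is simply discarded

    def flush(self):
        for path, data in self.pending.items():
            self.disk_writes += 1      # only surviving files cost a disk I/O
        self.pending.clear()

cache = LazyWriteCache()
cache.write("scratch.tmp", b"temporary data")   # typical short-lived temp file
cache.delete("scratch.tmp")                     # deleted before any flush
cache.write("report.doc", b"keep me")
cache.flush()
print(cache.disk_writes)                        # 1 -> the temp file never hit the disk
```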

 

It doesn't, however, appear to push performance up over what you get with the normal NTFS file system cache.

 

-Noel

Edited by NoelC

...  I did go and read http://www.overclock.net/t/1193401/why-it-is-bad-to-store-the-page-file-on-a-ram-disk ... ... Having read that, and noting the range of differing opinion, I've decided that I might just as well experiment and see what works out okay for me ...

 

^ Wise decision.

 

Here's an awesome ramdisk roundup and testing: http://www.overclock.net/t/1381131/ramdisks-roundup-and-testing


 

Here's an awesome ramdisk roundup and testing: http://www.overclock.net/t/1381131/ramdisks-roundup-and-testing

 

Yep :), though maybe now a tad "dated".

 

JFYI, there is some news on the topic:

http://reboot.pro/topic/19929-make-ramdisks-even-faster/

 

And, though IMDISK is not among the faster ones, the IMDISK Toolkit:

http://reboot.pro/topic/18324-imdisk-toolkit/

has recently solved the issue of automatically backing up contents at shutdown (among other new features: support for more image formats, etc.).

 

jaclaz


Radish, SSDs don't have mechanical parts to wear out.  Yes, flash memory does have a limited life, but given a very conservative 1000 write cycles capability per flash block, you'd have to write 250 terabytes to a 250 GB SSD before getting close to wearout.  Most people would take decades to write that much.  Do a bunch of peer to peer networking and you might get that down to 10 years. 
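As a rough back-of-envelope check of those figures (the daily write rate is just an assumed typical desktop workload):

```python
# Back-of-envelope SSD endurance estimate using the conservative figures above.
capacity_gb = 250
write_cycles = 1000                         # conservative cycles per flash block
endurance_gb = capacity_gb * write_cycles   # 250,000 GB = 250 TB of total writes

daily_writes_gb = 20                        # assumed typical desktop workload
years = endurance_gb / daily_writes_gb / 365
print(f"~{endurance_gb / 1000:.0f} TB endurance, ~{years:.0f} years at {daily_writes_gb} GB/day")
# ~250 TB endurance, ~34 years at 20 GB/day
```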

 

Show me an HDD that will last that long.

 

Okay Noel, you've convinced me that SSD might be worth a try. My main reason for taking that tack now isn't performance in terms of speed, but you do make valid points in your comparison of SSD to HDD - provided your quoted specs are accurate, of course, and I have some doubts. Where are you getting your figures from? Doubtless SSD manufacturers will be optimistic in their claims at best and indulge in pure fabulation at worst - so I'd take manufacturers' claims with a huge pinch of salt.

 

Nevertheless I'm going for a big ramdisk first, and I'd want to use that even if I did have an SSD. Don't know when I'll get round to the SSD though; I'm on a newly bought system and it will take time to iron out the teething problems I'm experiencing (not least of which is fairly often getting a BSOD on shutdown). When I do, I'll doubtless be contacting this forum asking for help in setting it all up - I'm not a geek, so I'd definitely need some guidance.

 

One other question though. Are you saying that you're running systems with no HDD in them at all, only SSD?

 

Thanks for the thoughts.

Edited by Radish

 

 I'd take manufacturer's claims with a huge pinch of salt.

 

My math is based on the tech inside SSDs, not manufacturers' claims.

 

Flash memory is good for many thousands of write/erase cycles (a good rule-of-thumb number has been 10,000).  Given that there's wear leveling and write amplification (owing to the way the internal controllers work) a figure of 1,000 write cycles for any given logical block on the disk is reasonable, if a bit conservative.  But you're right to be wary of claims.

 

The good news is that some tech reporting sites have taken it upon themselves to actually test the write endurance of various SSDs until they fail.  Google "SSD wear test" or "SSD endurance test".  Maybe throw the word "torture" in there.  You'll find that, for example, in testing some 240 GB drives will actually take close to a petabyte (1,000 terabytes) of data writes before failing.  This shows my 1000 x capacity figure is a decent estimate of expected life for planning purposes.
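Working those published test figures backwards (treating the roughly-one-petabyte result as a round number):

```python
# If a 240 GB drive survives roughly 1 PB (1,000 TB) of writes before failing,
# the implied number of full-drive writes comfortably exceeds the 1,000-cycle estimate.
capacity_gb = 240
observed_writes_tb = 1000                                 # ~1 PB observed in testing
full_drive_writes = observed_writes_tb * 1000 / capacity_gb
print(f"~{full_drive_writes:.0f} full-drive writes")      # ~4167, about 4x the estimate
```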

 

My main workstation has 6 SSDs and 3 HDDs in it, along with two external USB HDDs.  The system boots and runs from the SSD array, backed by a HighPoint 2720 SGL PCIe RAID controller, 24/7.  The HDDs are only there for backup and very low access data, and they literally stay spun down virtually all the time.

 

My small business server in my office has 3 SSDs and 1 HDD in it, along with one external USB HDD.  Same reasons, same characteristics.  HDDs are only for backup and stay spun down.  The thing boots and runs from the SSD array (RAID 5 in this case) and literally stays cold to the touch.  Total system power draw is about 15 watts when the monitor is sleeping and the machine is idle.

 

I only have HDDs at all because I had them before I got the SSDs.  I have been running essentially off of SSD since April 2012.

 

My one piece of advice:  Don't skimp on the storage capacity.  If you feel you really need 100 GB, opt to get a 256 GB drive.  If you think you need 200 GB, consider getting a 512 GB drive (or better yet, a pair of 256 GB SSDs and set up a RAID 0 array).  SSDs run best when you leave a fair bit of free space (it's called "overprovisioning").  Actually any operating system runs best with a fair bit of free space, so it's a good idea to overprovision for multiple reasons.

 

SSDs actually RAID better than any HDD ever dreamed of, since there's virtually no latency.  You literally add up the performance of the individual drives right up to the point where the other parts of the system can't keep up.
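A minimal sketch of that scaling argument; the per-drive and system-ceiling figures here are assumptions for illustration, not measurements from my machines.

```python
# Idealized RAID 0 scaling: member throughputs add up until some other part of
# the system (controller, PCIe link, CPU) becomes the bottleneck.
def raid0_throughput(per_drive_mbps, drives, system_limit_mbps):
    return min(per_drive_mbps * drives, system_limit_mbps)

SSD_MBPS = 500       # assumed SATA III SSD sequential throughput
LIMIT_MBPS = 1600    # assumed controller/bus ceiling

for n in range(1, 5):
    print(f"{n} SSDs: {raid0_throughput(SSD_MBPS, n, LIMIT_MBPS)} MB/s")
# 1: 500, 2: 1000, 3: 1500, 4: 1600 (capped by the rest of the system)
```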

 

My workstation can sustain about 1.6 gigabytes / second low level I/O throughput (yes, I said sustained throughput).  That becomes 3.5 gigabytes / second with caching.  Latency is something like 0.1 milliseconds.  This means that even if I have several really high demand applications (e.g., Photoshop, maybe some VMs, Visual Studio, Subversion, and virtually anything else I can want to use) all running simultaneously I just don't feel a slowdown.

 

By comparison the typical throughput of an HDD is 120 megabytes / second.

 

Try doing something like this with an HDD equipped system.

 

[Screenshot: PracticalDiskSpeed.png - disk speed benchmark results]

 

-Noel


P.S., if you want to dabble with the tech and get started for not much green, look on eBay specifically for OCZ Vertex 3 drives.  They're not overly expensive, and are the ones I've found tried and true in real usage (all my drives are OCZ Vertex 3 models).  Just now I saw three different Vertex 3 240 GB drives listed for under a hundred dollars.  These really work.

Edited by NoelC

  • 1 month later...

And now, if you think that your conventional SATA SSDs are fast enough, it's time to get PCIe ones  :w00t: that can seemingly run circles around them:
http://www.theregister.co.uk/2015/07/28/review_kingston_hyperx_predator_hhhl_480gb_pcie_ssd/

These numbers are crazy:
[Image: Kingston HyperX Predator 480 GB HHHL PCIe benchmark results]

 

Here are the corresponding numbers for the Kingston Savage (said to be among the fastest "conventional SATA" SSDs around):

[Images: Kingston HyperX Savage 240 GB SSD benchmark results (kingston_hyperx_savage_240gb_ssd_3.jpg, kingston_hyperx_savage_240gb_ssd_4.jpg)]

 

jaclaz

Edited by jaclaz

I have numbers better than those in many categories using an array of "traditional" SATA III SSDs that are now 3 years old, and the ATTO numbers shown, where reads and writes differ markedly, imply that there are problems.

 

That being said, the numbers published above for that Kingston HyperX Predator are a good bit better than mine with regard to accessing tiny data blocks, and THAT's very significant.  High 4K numbers imply low latency.  The lower the better.  Note the comment about it not being NVMe.  That's significant too - it says the hardware could potentially perform even better.

 

In practice, RAM caching - which Windows provides - makes small I/O numbers less of an issue, though when reading a buttload of tiny (or fragmented) files that are not already in the cache, a low-latency device will really shine.  This equates to the system feeling more responsive on the first run of applications.  I'm imagining 1 second Photoshop cold startup times, for example (that happens for me in 3 seconds).

 

I'd love to see what the timing (in files enumerated per second) would be for doing Properties on the contents of the root folder of drive C: on a system with that HyperX Predator serving as the boot volume.  480 GB is too small to be practical, though (says the man with 6 x 480 GB SSDs in his array).

 

-Noel

Edited by NoelC

And now, if you think that your conventional SATA SSDs are fast enough, it's time to get PCIe ones  :w00t: that can seemingly run circles around them:

http://www.theregister.co.uk/2015/07/28/review_kingston_hyperx_predator_hhhl_480gb_pcie_ssd/

These numbers are crazy:

[Image: Kingston HyperX Predator 480 GB HHHL PCIe benchmark results]

 

jaclaz

 

Sorry but I'm not impressed. The 4K read speed (the one that really matters) isn't better than that of a decent SATA3 SSD.

 

Wake me up when some new SSD achieves 100+ MB/s @ 4K read.


Well, of course a single device cannot be compared to an array, and benchmarks don't always reflect actual speed or speed increase in "real" operations, but comparing this thingy with the fastest "conventional SATA" SSD drive Kingston makes (also said to be one of the fastest around), as the good guys at The Register did:

http://www.theregister.co.uk/2015/07/06/review_kingston_hyperx_savage_240gb_ssd_storage/

seems fair enough to me. For completeness, and to allow an "at a glance" comparison, here are the corresponding graphs for the Kingston Savage (which I will also add to the previous post):

[Images: Kingston HyperX Savage 240 GB SSD benchmark results (kingston_hyperx_savage_240gb_ssd_3.jpg, kingston_hyperx_savage_240gb_ssd_4.jpg)]

 

@Telvm

Provided that the Savage is a "decent" SATA3 SSD, the net increase in 4K reads is still around 35% - not that bad, as I see it.

 

jaclaz

Edited by jaclaz

 

Sorry but I'm not impressed. The 4K read speed (the one that really matters) isn't better than that of a decent SATA3 SSD.

 

 

That there is so much difference between read and write speed clearly means write-back caching is involved, which means those numbers aren't worth as much as the publisher would like you to believe they are. 

 

Any system with Intel RST drivers and sufficiently large RAM will show very high numbers for write speeds because of write-back RAM caching.  Conversely, no device you will encounter will be able to sustain much higher tiny I/O rates until something fundamentally changes in the PC architecture.

 

Never forget that there are limitations based on the operating system itself that influence the speed at which a tiny I/O request can be turned around.  That's why you'll notice that even the tiny I/O write speed shown is still topping out at less than 100 MB/second.

 

I've mentioned this before - 4K bytes divided into 94.91 megabytes is 24,297 I/O operations in 1 second, or about 0.041 milliseconds per operation.  Even with today's giga processors 41 microseconds isn't a whole lot of time to do 1 I/O operation.  It simply takes some base time for the CPU to call through the proper layers to do an I/O operation.
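The same arithmetic, spelled out (using binary megabytes, which is how the 24,297 figure works out):

```python
# Converting the quoted 4K write throughput into a per-operation time.
block_bytes = 4 * 1024
throughput_bytes = 94.91 * 1024 * 1024    # 94.91 MB/s, treated as binary megabytes
iops = throughput_bytes / block_bytes     # ~24,297 operations per second
per_op_us = 1_000_000 / iops              # ~41 microseconds per 4K I/O
print(f"{iops:.0f} IOPS, {per_op_us:.1f} us per operation")
```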

 

Assuming a virtually zero latency operation for the write-back cache, the difference you're seeing between the actual reads and the (effectively instantaneous) writes is 112 microseconds - 41 microseconds == 71 additional microseconds to do the I/O from the flash memory.

 

71 microseconds to complete an I/O operation across ANY interface is phenomenally fast.  That's 0.071 milliseconds round trip.  I don't know the specifics for this card, but I'm willing to bet the lion's share of that time is actually spent getting the data across the various interfaces.

 

You are simply NOT going to see a separate device be able to return I/O data to a CPU a whole heckuva lot faster than that.

 

NVMe will help, as the stack is shortened.  But the data is still out on the PCIe bus, which takes time to use.

 

Now, when gargantuan blobs of flash memory are integrated right into the processor chipset, THEN we'll see much greater tiny I/O throughput.

 

-Noel


Well, of course a single device cannot be compared to an array,

 

Actually, to me it seems a fair comparison, in that you can plug, say, 3 "traditional" SSDs into the SATA III ports on any given (modern) motherboard and make a super high performance RAID 0 array out of them.  Traditionally that's been cheaper, and it leaves the PCIe slot(s) free for your favorite game-playing room heater.

 

Edit:  Furthering the thought...

 

Kingston HyperX Predator 480 GB at NewEgg.com today:  $484.99

 

Kingston HyperX Savage 240 GB "Traditional" SATA III SSD:   $91.99

 

Put three of the latter in a RAID 0 arrangement with Intel RST and spend the other $200 on a big RAM upgrade, and not only will you have half again more storage but I'm willing to bet the system will run better overall than with one of the "dollar a gigabyte" PCIe cards.
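Putting rough numbers on that (prices as quoted above; everything else is back-of-envelope):

```python
# Cost/capacity comparison of the PCIe card vs. a 3-drive SATA RAID 0 array.
predator_price, predator_gb = 484.99, 480
savage_price, savage_gb = 91.99, 240

array_price = savage_price * 3              # ~$276 for three Savage drives
array_gb = savage_gb * 3                    # 720 GB -> half again the capacity
leftover = predator_price - array_price     # ~$209 left over for a RAM upgrade

print(f"Array: {array_gb} GB for ${array_price:.2f}, ${leftover:.2f} left over")
print(f"$/GB: Predator {predator_price / predator_gb:.2f}, array {array_price / array_gb:.2f}")
# $/GB: Predator ~1.01 (the "dollar a gigabyte" card), array ~0.38
```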

 

-Noel

Edited by NoelC

True, the PCIe card solution can't really be compared to the less expensive multi-SSD solution, since the latter delivers far more capacity and is more expandable.

 

-Noel

