
I need screenshots


fdv


Hi folks, I wanted to show the world how much memory can be saved without IE, so I created this:

http://www.vorck.com/windows/memory.html

It turns out to be about 20 MB of commit. In the grand scheme, with modern machines, that doesn't really make much difference. Dual- and quad-core Pentiums are dropping in price. Memory is cheap. You can have a nice machine, really fast, in the $500-$900 range (and that includes building a Xeon-based server with a high-end PCI-X motherboard).

So I changed the focus of the page to show the memory usage of all Win32-based operating systems. (Mac OS X and Linux would be hard to compare since they use memory so differently -- for example, Linux grabs all available RAM and allocates it as needed.)

I think it's very telling that we've gone from 60 MB with Windows 2000 to 88 MB in XP to 260 MB in Server 2008.

I need a new screenshot for Vista (the one I have came from something called a "Gamer's Edition") and I need one of NT 3.51 SP5. BTW, you cannot even begin to imagine what kind of nightmare it was to get OS/2 running.

If you can make me a Vista SP1 or an NT 3.51 SP5 screenshot, please duplicate the arrangement I have: the version shown in the upper right corner and the available memory in Task Manager (careful not to cut the bottom off). Zip or RAR it, and make it a lossless image like PSD, TIF, or BMP, not JPG or GIF. I'll pick the one that fits best with the other screenshots.

BTW, the reason I posted this topic here is that I think the memory-savings angle is most interesting to this crowd of tweakers...

Thanks



In perfmon (start > run > perfmon), you can add the "Working Set" counter under the "Process" object, and select the "_Total" instance. Then, add the "Cache Bytes", "Pool Nonpaged Bytes", and "Available Bytes" counters under the "Memory" object.
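For anyone who would rather not click through perfmon each time, the same counters should be readable programmatically through the PDH API (the counter interface perfmon itself sits on). This is only a rough sketch, not tested against every OS in the comparison, and it assumes the counter paths above exist on the box; it needs to be linked against pdh.lib:

/* memcounters.c -- dump the four counters listed above as a one-shot snapshot.
   Build (VC): cl memcounters.c pdh.lib      Build (MinGW): gcc memcounters.c -lpdh */
#include <windows.h>
#include <pdh.h>
#include <stdio.h>

int main(void)
{
    /* Counter paths exactly as described above. */
    const char *paths[] = {
        "\\Process(_Total)\\Working Set",
        "\\Memory\\Cache Bytes",
        "\\Memory\\Pool Nonpaged Bytes",
        "\\Memory\\Available Bytes"
    };
    PDH_HQUERY query;
    PDH_HCOUNTER counters[4];
    PDH_FMT_COUNTERVALUE value;
    int i;

    if (PdhOpenQuery(NULL, 0, &query) != ERROR_SUCCESS)
        return 1;

    for (i = 0; i < 4; i++)
        PdhAddCounterA(query, paths[i], 0, &counters[i]);

    /* One sample is enough -- these are instantaneous values, not rates. */
    PdhCollectQueryData(query);

    for (i = 0; i < 4; i++) {
        if (PdhGetFormattedCounterValue(counters[i], PDH_FMT_DOUBLE, NULL, &value) == ERROR_SUCCESS)
            printf("%-32s %10.1f MB\n", paths[i], value.doubleValue / (1024.0 * 1024.0));
    }

    PdhCloseQuery(query);
    return 0;
}

That just prints the same four numbers perfmon would chart, taken as a single snapshot instead of a running graph.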

How far off is commit? Redoing all of this would be quite an undertaking. Is my approach just totally wrong?

Is there no freeware app to 'shortcut' the process you outline above? I'm surprised it takes this much jiggering to get an accurate accounting of RAM usage. I'm surprised Winternals/MS never made an applet.

Edit: 64.17.159.209 will always get you there, Tain! :lol:

Edited by fdv

Unless commit is drastically incorrect, I'd say you should stick with it. Commit is a commonly used metric for this sort of analysis and there is some value in sticking with convention.


I just did a heavy software rebuild of my main machines and currently have few operating systems up.

I have eCS 1.2 (OS/2 4.51) up and running as a real boot, but have no graphics installed in it as yet.

I have a VPC shot of OS/2 1.3, and 2.00 + 2.10 are easy to come by, but those images were lost in the scrapedown. They should not be hard to remake.

I have not used 3.0 or 4.0 for a while, though.

The Windows NT 3.51 build on the #2 frame is currently dead in the water. I need to rebuild it.

Not sure what shots to do, but I will look for suitable things.


How far off is commit? Redoing all of this would be quite an undertaking. Is my approach just totally wrong?

Is there no freeware app to 'shortcut' the process you outline above? I'm surprised it takes this much jiggering to get an accurate accounting of RAM usage. I'm surprised Winternals/MS never made an applet.

The Commit number in Task Manager is not memory usage in RAM, but the (peak/total/current) amount of pagefile-backed virtual address pages in use. These can be in RAM, but they can also be paged out to the pagefile (and thus not RAM usage), and the count can also include pages on the dirty and standby lists that are backed but have no mapped page in RAM (nor in any running process). Also, it doesn't count pages that are memory-mapped without pagefile backing (like kernel nonpaged pool, potentially the executive, event log data, etc.).

In short, it's not an accurate measurement of the OS's RAM footprint, because it only takes into account pages that are pagefile-backed and makes no distinction as to where those pages are, or whether they're even still valid process-associated pages. This counter can be somewhat accurate at times, but it can also be wildly inaccurate, and you have no way of knowing for sure without using the method I mentioned above (which is always close to accurate no matter what).
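If anyone wants to see how far apart the commit charge and the actual physical numbers can get on a given box, here's a quick sketch that prints them side by side. It assumes XP/2003 or later, since it relies on GetPerformanceInfo from psapi; link against psapi.lib:

/* commitvsram.c -- print the commit charge next to physical memory in use.
   Build (VC): cl commitvsram.c psapi.lib */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

static double to_mb(SIZE_T pages, SIZE_T page_size)
{
    /* All PERFORMANCE_INFORMATION counts are in pages, not bytes. */
    return (double)pages * (double)page_size / (1024.0 * 1024.0);
}

int main(void)
{
    PERFORMANCE_INFORMATION pi;

    pi.cb = sizeof(pi);
    if (!GetPerformanceInfo(&pi, sizeof(pi))) {
        printf("GetPerformanceInfo failed: %lu\n", (unsigned long)GetLastError());
        return 1;
    }

    printf("Commit charge:      %8.1f MB\n", to_mb(pi.CommitTotal, pi.PageSize));
    printf("Physical in use:    %8.1f MB\n",
           to_mb(pi.PhysicalTotal - pi.PhysicalAvailable, pi.PageSize));
    printf("System cache:       %8.1f MB\n", to_mb(pi.SystemCache, pi.PageSize));
    printf("Kernel nonpaged:    %8.1f MB\n", to_mb(pi.KernelNonpaged, pi.PageSize));
    return 0;
}

The top two lines can track each other on a lightly loaded box and drift well apart once things get paged out, which is the distinction being made above.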

Commit is a commonly used metric for this sort of analysis and there is some value in sticking with convention.
Convention doesn't always = correct. Again, it's a metric, but I don't think it accurately measures what fdv is actually trying to measure, as per my comment above.

OS2fan -- OS/2 shots would be great, but I think I only need Warp 4 plus FixPack 15 or higher; an eComStation shot would be great too. I can't get NT 3.51 to work in a Virtual PC at all -- I just don't understand it -- so that one would also be appreciated.

Cluberti... I get what you're saying now with the added explanation. These are a pain to build -- is there no little applet from anyone that shows the correct info? Or do I have to load the management console, build this metric, and do the math? Please tell me someone has simplified this with a simple 32-bit binary...


Benchmarking Linux memory usage is not really harder because it allocates memory differently; it's rather that you can recompile the kernel with whatever options you want.

A personally configured kernel uses about 15 MB, I think. A distribution one is much bigger, though Linux kernels don't need to be monolithic, thanks to modules. :)

BTW, the bad thing I noticed in Windows about resource usage was that every installed peripheral used some of those resources, so plugging your mouse into each of your USB ports (which makes/made Windows install the peripheral on every one of them) made the resource usage grow. That was a few years ago, when computers had much less RAM; now you wouldn't notice it.

I thought of that issue because peripheral handling in Linux (i.e. a module which can be loaded or unloaded at any time) is just much sexier than the Windows way. However, I guess that can only be done when you have the kernel sources, or at least some headers and definitions.


[maybe off-topic]

I guess we can agree anyway that between NT 4.00 (without Explorer) and Vista there is a 10x factor.

That should mean that having 640 MB on a Vista machine (instead of the 64 MB of RAM that made an NT 4.00 machine run decently) would make it run just as decently, but 1 GB is actually what I see as the bare minimum to run Vista (almost, but not quite, unlike ;)) decently. :unsure:

A "bare" NT 4.00 would fit on a 100 Mb Iomega zip, if I remember correctly a "standard" install was about 150 Mb, 2K about 600 Mb, XP around 1.5 Gb.

Now, a "standard" Vista install is about 5Gb, with a multiplying factor of x33!

And all "newish" apps tend to be larger and larger.

So, besides the sheer memory requirements/occupation, I would like to see how the peaks compare when doing comparable everyday tasks on the two systems, like:

checking e-mail

writing a letter

using a spreadsheet to check your bank account balance

saving some data to a CD-R

Everyone has his own theory; mine is that the key to swift computing is to downgrade, i.e. use yesterday's software on today's hardware, as Microsoft has this tendency to add lots of (nice, mind you :)) "bells and whistles" to their software in such a way that, at the time of its release, it behaves "fast" only on existing "top grade" (read: MUCH more expensive) hardware.

[end possible off-topic]

jaclaz


I agree with the implied premise that things would run far faster.

Following this aside... Look at Windows Server 2008. It unquestionably has a lot of components that are necessary or have become necessary. I did not do a minimal Core install (for those who don't know, Server 2008 has a Core install mode where no GUI is installed). The message MS is sending is that the GUI is responsible for a lot of overhead.

Down on the bottom row, that's NT Server. I found out accidentally, when installing NT, that if EXPLORER.EXE (not IE, but the actual system file) does not get installed (in my case the file was not found at all during the copy -- it's a long story), the desktop will still appear, but you can't drag and drop or manipulate any desktop objects. Commit is the same without EXPLORER.EXE on NT4. The point of all of this is that I wonder what tragedy took place between then and now that Server 2008 can't have a very simple, lightweight GUI. It's either all or nothing, and that's unfortunate.


Here is Win351mem.zip, a BMP of Windows NT 3.51 memory. I used Norton Utilities to get further memory details.

It is basically Windows NT 3.51 (from the CD-ROM), with SP5 and a few apps installed. It dual-boots with PC-DOS 5.02 :) It's a complete install, but it shows that Wendy is a newbie to networks :S

It took ages for the files to copy (e.g. 6 hrs -- I ran this bit overnight). Once they did, it installed relatively fast.


