
Windows XP is still king


Dibya


9 minutes ago, NoelC said:

I wouldn't mind seeing such a video.  Your description has me wanting to build such a system.

When a system can run in a configuration this small, it's hard not to have it boot up quickly...

[Screenshot: SmallFootprintXP.png]

Oh, and there's no such thing as "overkill" when it comes to CPUs.  More is better.  :)

-Noel

Wow! Your system seems to be another beast!



That's just a VM pictured running on my beast.  I have a dual Xeon system, but it's an older generation (dual Westmere X5690s).  The host system running on the hardware (including 8 SSDs in RAID 0) is Win 8.1 x64 Pro/MCE.

I have booted XP, Vista, Win 7, Win 8.1, Win 10, and another fruit-flavored system all simultaneously, and the host system is still so responsive it's virtually impossible to tell anything is running.  I don't really NEED a dual Xeon E5 system, which doesn't mean I don't WANT one...  I'd be happy if my software builds took only half as long.

By the way, lest you think a modern Windows system is bloated, I've noticed a certain fruit-flavored system runs 191 processes and uses up 2.34 GB just to have an otherwise idle desktop.  I haven't tweaked it for efficiency as much as I have my Windows VMs, though.

-Noel

Edited by NoelC

Just out of curiosity, here are some representative screen grabs showing my Win 8.1 host running multiple VMs, and the CPUs aren't even breathing hard...  I wonder whether an XP host would be able to run six different OSes in VMs simultaneously.  Note that I only screen-grabbed from one of my monitors.

http://Noel.ProDigitalSoftware.com/ForumPosts/Win81/XP_VM.png

http://Noel.ProDigitalSoftware.com/ForumPosts/Win81/Vista_VM.png

http://Noel.ProDigitalSoftware.com/ForumPosts/Win81/Win7_VM.png

http://Noel.ProDigitalSoftware.com/ForumPosts/Win81/Win81_VM.png

http://Noel.ProDigitalSoftware.com/ForumPosts/Win81/Win10_VM.png

http://Noel.ProDigitalSoftware.com/ForumPosts/Win81/OSX_VM.png

-Noel


On 2/3/2017 at 6:10 PM, JodyT said:

I have a hard time understanding how you're getting stuttering issues on Windows NT 6.x, especially on a quad core.  Those systems actually make MORE use of the hardware than XP does.  I'm baffled.  What is it that you're trying to accomplish?  There is no way that Vista or Windows 7 should have such difficulty unless you're short on RAM or using an older GPU.  Just guessing, though.

:)

I was using 4 GB of DDR2-800 RAM.  I don't think the issue was 7 at all; yesterday I had a couple of crashes that seemed to be related to the video card.  I'm running a 4 GB Zotac GeForce GT 730 card that seems to be the source of the problem.  I'm not sure what's wrong with it, but it only seems to run stably on Linux using the NVIDIA 340 driver (the 360 series has caused issues there as well).  Even my old 1 GB GT 620 card was 100% reliable compared to the new card!

EDIT: Yep, it was the Zotac card.  I chucked it for my old 620 and it works fine.  I ended up going to Vista and everything is working well (with the exception of taking forever to update, but that's done now).

New Edit (Feb 26): Guess what?  I think my 730 troubles have been resolved after all.  After a little bit of Googling, I followed someone's suggestion to stick strictly to the driver on the CD the card came with, and all is well!  It's not the latest driver, but it works without a hiccup now!

Edited by OldSchool38
New info

5 hours ago, NoelC said:

Just out of curiosity, here are some representative screen grabs showing my Win 8.1 host running multiple VMs, and the CPUs aren't even breathing hard...  I wonder whether an XP host would be able to run six different OSes in VMs simultaneously.  Note that I only screen-grabbed from one of my monitors.

http://Noel.ProDigitalSoftware.com/ForumPosts/Win81/XP_VM.png

http://Noel.ProDigitalSoftware.com/ForumPosts/Win81/Vista_VM.png

http://Noel.ProDigitalSoftware.com/ForumPosts/Win81/Win7_VM.png

http://Noel.ProDigitalSoftware.com/ForumPosts/Win81/Win81_VM.png

http://Noel.ProDigitalSoftware.com/ForumPosts/Win81/Win10_VM.png

http://Noel.ProDigitalSoftware.com/ForumPosts/Win81/OSX_VM.png

-Noel

If I had to place a bet, I'd say the x64 Edition and Server 2003 variants (in x64 builds) could.  If I could take half the money off the table, I'd say particular XP installations could do it.

:)


I thought about XP x64, but my memory from when I ran it (for a total of about two years, from 2004 to 2006) is that, since it was so newly built for 64 bits, there were still system data structures just too limited to do lots and lots of things simultaneously.  That being said, I never had the large quantity of RAM back then that I have now.

-Noel

Edited by NoelC

11 hours ago, dencorso said:

XP SP3 (x86) probably cannot.  But 7 SP1 x64 certainly can.  I have it just for running VMs...

Really?  My XP SP3 PC (PAE-patched, of course) runs a good many VMs together.  Just for fun I opened several VMs (XP SP1, Longhorn, Vista Beta 2, Vista SP2, 7 x86) in VirtualBox.  Lots of RAM was eaten, but CPU usage stayed between 40 and 50%.  I have never seen CPU usage go over 60% unless I ran some renders in Autodesk Maya or 3ds Max.  I don't know why those two programs eat so much.  Does anybody know?


Things like software raytracing for 3D rendering are very compute-intensive.

Good CPU-intensive software tries to busy every available core to get its work done faster.  That's why you buy machines with ever more cores.  Otherwise a multi-core machine would be no faster than a single core machine at doing any one job.

It's quite an art to create software that scales its execution to available resources (e.g., CPU cores and RAM availability and bandwidth, as well as I/O capacity).  I've been working on perfecting that for quite some time in my own graphics software.  At the moment, I get the best results (fastest completion) when any given job is divided up into one thread per physical core plus one thread for every two Hyperthreading logical processors.  So, for example, a 12 core system with Hyperthreading  shows 24 logical processors, and my software by default creates 18 simultaneous threads to get its tasks done.  It may divide the task into many more parts, and that's governed by how much RAM cache is available, with the overall process being throttled by how much intermediate data can be stored in RAM. 
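For illustration, here is a minimal sketch of that thread-count heuristic as I read the description above (the function name is hypothetical, and the core counts are hard-coded example inputs rather than queried from the OS):

```cpp
#include <iostream>

// One worker per physical core, plus one worker for every two
// extra (Hyperthreaded) logical processors, per the heuristic above.
int workerThreads(int physicalCores, int logicalProcessors) {
    int htExtra = logicalProcessors - physicalCores; // HT-only logical CPUs
    return physicalCores + htExtra / 2;
}

int main() {
    // The example from the post: 12 physical cores, 24 logical processors.
    std::cout << workerThreads(12, 24) << " threads\n"; // prints "18 threads"
}
```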

Thus the bigger the system, the more cores, the larger the cache, and the more RAM installed, the faster the results will be ready.

Coding the subsystem to manage that was really quite interesting.

-Noel


4 hours ago, NoelC said:

Good CPU-intensive software tries to busy every available core to get its work done faster.  That's why you buy machines with ever more cores.  Otherwise a multi-core machine would be no faster than a single core machine at doing any one job.

Or, alternatively, the user might manually make 'em work.  Once I had to use a piece of software that ran single-core only on 32-bit systems, doing some scientific computations.  I had to perform the task many times with different parameters and compare the results.  So I started three different program instances and manually set the affinity so each program used only one core (a Phenom X3 here), and the computation time was not much longer than running a single instance, even though slow RAM and HDD might have been a choke point.

To be honest, I don't know if setting affinities manually was any better than leaving it to the system.  From what I've read (afterwards), Windows should handle it well enough automatically, with no noticeable performance drop, but it gave some (false) sense of control ^^
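For what it's worth, here is a minimal Windows-specific sketch of that manual pinning; this is one way to do what Task Manager's "Set affinity" dialog does, not necessarily what the poster used:

```cpp
#include <windows.h>
#include <iostream>

int main() {
    // Affinity is a bitmask of logical processors; bit 0 set means
    // "run only on CPU 0".  Launching three instances with masks
    // 1, 2, and 4 would pin each one to its own core of a Phenom X3.
    DWORD_PTR mask = 1;
    if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
        std::cerr << "SetProcessAffinityMask failed: " << GetLastError() << '\n';
        return 1;
    }
    // ... run the single-threaded computation here ...
}
```

Newer versions of Windows also expose the same thing without code, via the `start /affinity` switch in a command prompt.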


4 hours ago, Dibya said:

No, it didn't.

You should take some time to learn how to critically read *any* statistical data.

Variations within +/-1%, or more likely 2%, represent only the possible error in the sampling or data collection.
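As a worked illustration of that point (assuming the share comes from a sampled counter; the sample size n below is purely hypothetical), the standard error of an estimated share is

\mathrm{SE}(\hat p) = \sqrt{\hat p(1-\hat p)/n}, \qquad \text{95\% margin} \approx 1.96\,\mathrm{SE}(\hat p)

With \hat p = 0.09 and n = 50{,}000, \mathrm{SE} \approx 0.0013, so the 95% margin is about ±0.25 percentage points, and a move from 9.07% to 9.17% (0.10 points) sits well inside it.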

jaclaz

 

Edited by jaclaz

2 hours ago, jaclaz said:

No, it didn't.

You should take some time to learn how to critically read *any* statistical data.

Variations within +/-1%, or more likely 2%, represent only the possible error in the sampling or data collection.

jaclaz

 

It went from 9.07 to 9.17%, and I was reading it like a layman would.  It may be something related to the mean of the data.


@mciwnn... Intel CPUs have the ability to scale and redistribute the work across cores automatically, which is why, if you run a single-threaded application on an Intel CPU, you will see every core fluctuating between 60 and 76%, while on AMD you will see Core 0 at 100% and the others at 1-2%.

That's basically why running a single-core application on its own is faster than launching as many instances as you have cores and manually applying affinity.

For the record, does anyone remember the failure of AMD's FX/Bulldozer chips when they tried to run single-threaded programs?

Edited by FranceBB

I always thought it was the OS spreading the work out over multiple cores, assigning threads to different logical processors when the kernel gets control during a system call, not some automatic processor feature.  I'm running a job right now (SFC /VERIFYONLY) that's got one core jammed to max and all the others idle.

[Screenshot: OneCoreUsed.png]
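A tiny demo of that point (a hedged sketch; GetCurrentProcessorNumber is a real Win32 call, but the loop sizes are arbitrary): a single busy thread reports which logical processor it is running on, and any changes you see come from the OS scheduler migrating it, not from the CPU itself.

```cpp
#include <windows.h>
#include <iostream>

int main() {
    for (int i = 0; i < 10; ++i) {
        // Which logical CPU is this thread executing on right now?
        std::cout << "Running on CPU " << GetCurrentProcessorNumber() << '\n';
        // Burn some cycles so the scheduler has opportunities to move us.
        volatile unsigned long long x = 0;
        for (unsigned long long j = 0; j < 200000000ULL; ++j) x += j;
    }
}
```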

-Noel

