
Mr Snrub

Patron
  • Posts: 765
  • Joined
  • Last visited
  • Donations: 0.00 USD
  • Country: Sweden
Everything posted by Mr Snrub

  1. QFT - in fact, due to Windows File Protection and the fact that a lot of the system binaries have open handles which prevent overwriting while the system is running, it's more likely that malware masquerading as OS files will be running from another (often temp) folder.
     Another trick that has been used is to have lookalike executable names - dllh0st.exe, for example. If suspicious of a running process, use Task Manager or Process Explorer to view the command line rather than just the process name - that can often give clues as to what a process is.
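     A quick way to do the same check from a command prompt, as a sketch (dllhost.exe is just the example name from above, and wmic is assumed to be available on the system):

         rem List the full path and command line for every process with this name,
         rem so a lookalike such as dllh0st.exe running from a temp folder stands out
         wmic process where "name='dllhost.exe'" get ProcessId,ExecutablePath,CommandLine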
  2. Should be - this is actually a very common scenario for Citrix Server installations that use M: as the boot drive letter. However, when multiple disks (not partitions) are involved it can be a bit trickier. I would guess that the BIOS enumerated the drives and the one you selected was the second of the 2 it found - sometimes it can help to unplug (or disable in the BIOS) extra hard disks or optical drives (even card readers sometimes) so that there is no chance of another drive letter being assigned for %systemdrive%.
     To check if you might run into problems, temporarily disable the other disk in the BIOS (or unplug it) and verify the system boots and Windows reports it is still on E:. One possible issue you might find is that the first disk has the boot loader and is marked active, but the OS is installed on the second disk - if you remove/disable either disk then the system won't boot.
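     Once booted with the other disk disabled, a quick sanity check from a command prompt (just a sketch - mountvol simply lists the volumes and the drive letters assigned to them):

         rem Confirm which drive letter Windows considers the system drive
         echo %SystemDrive%
         rem List all volumes and their drive letters / mount points
         mountvol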
  3. I am closing this thread by request as it got derailed and there is a lot of information for the OP in the reply posts to chew over. Clearly it's a matter of opinion as to which tweaks (a) can be made safely without immediate or down-the-line side effects and (b) will actually make a difference to performance, or just the system startup time.
  4. That would be Visio (I have the 2003 version at home) - I pretty much use it solely for network diagrams, occasionally office floor plans.
  5. 24/8Mbps connection - not very accurate for the closest speedtest site, it seems. A bit more believable when testing a few hundred kilometers away at the southern end of Sweden.
  6. Internet connection is 24Mbps down/8Mbps up with 5 public IP addresses. I use a (UPnP) NAT router anyway, so only 1 IP is typically used - but there is a switch connected to the cable modem so I can hook clients in there if another public IP is ever needed. WLAN is WPA-encrypted, provided through the same GigE router, which simplified my network cabling a bit.
     Client1 is my wife's primary machine, for gaming & graphics work.
     Client2 is my machine, for gaming, debugging, etc.
     Client3 is my wife's secondary machine for legacy apps that don't like 64-bit Windows (probably retiring soon as a gift to a friend).
     Client4 is the machine in the guest bedroom for visitors, with a handful of games installed.
     Server1 is my old client machine; it acts primarily as a file server and is where the Squeezebox streams its music from, and it also runs Virtual Server as my sandbox for hotfix testing, debugging and "poking to see what it does" scenarios.
     I have the website running in a VM for portability, as I'm too lazy to figure out how to set up SQL Express every time I want to reinstall or upgrade the OS on the bare metal - once I upgrade my current rig it will become the new server, and it is hypervisor-capable, so that's when I will have the website running on W2K8. (Vista clients + W2K8 file server on a GigE network using SMB 2.0 leads to very nice file transfer speeds - the XP client, being wireless, would not benefit much from an upgrade; it is rarely used and it's handy to have a legacy Windows client that isn't a VM for debugging.)
     HTPC is a recent addition, replacing the DVD player and allowing playback of (all-region) DVDs, HD-DVDs and BDs as well as streamed content.
     XBox 360 Elite was the most recent addition, purchased for its HDMI output and in readiness for Fable II. (Certain types of game just play so much better on a console & big-screen TV, but I'll always use my PC for FPS & RTS games.)
  7. If it's a straight batch file then you could add the following to make a conditional jump if the installed OS is not 64-bit:
     if "%programfiles(x86)%XXX"=="XXX" goto 32BIT
     Follow this line with your 64-bit specific parts, and where the 32-bit part starts have the following label:
     :32BIT
     A complete batch file which just echoes to the screen which platform it believes is present is shown at the end of this post as an example.
     Edit: An alternative could be to check the environment variable PROCESSOR_ARCHITECTURE, e.g.
     if "%PROCESSOR_ARCHITECTURE%"=="AMD64" goto 64BIT
     (Yes, on Intel Core 2 Duo CPUs this variable is still reported as AMD64.)
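     Reconstructing that example as a minimal sketch (the echoed messages are illustrative):

         @echo off
         rem %programfiles(x86)% is only defined on 64-bit Windows, so if it
         rem expands to nothing we assume a 32-bit OS
         if "%programfiles(x86)%XXX"=="XXX" goto 32BIT
         echo This appears to be a 64-bit version of Windows
         goto END
         :32BIT
         echo This appears to be a 32-bit version of Windows
         :END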
  8. I used ZoneAlarm Pro years ago, but found that it got slower and filled with more features that I didn't want in a personal firewall solution, so I dumped it once the license expired. Now I just use the built-in Windows Firewall, and rely on:
     - NAT router to drop external attack attempts before they even reach any clients
     - Windows Defender and anti-virus for malware detection
     - UAC to prompt when a program is trying to do "something administrative" (I use Vista)
     - common sense when browsing, downloading & receiving emails with attachments I don't expect or recognise
     (As the NAT router takes care of the perimeter, the Windows Firewall is just protecting each client from its peers, just in case something managed to get in and hit one of the clients.)
  9. Only a minidump, so not much info to extract, but it's the same bugcheck and underlying reason - an attempt to free a memory allocation which has already been freed. Can't see from this dump what driver was freeing the memory, but as before it could be the victim rather than the cause - this allocation was last used for a File object, where before it was related to networking (TCP).
     The following driver I thought was installed by VMware for its emulated NIC, but it is still loaded in this dump, and look at the date on it... I don't think this is an onboard device from the last time I checked the specs, so if you don't have one of these installed it may be a good idea to see if it's in Device Manager, and maybe even rename/delete the file on disk to prevent it being loaded.
     Though it's not a filter driver, so I don't see how it should be interfering... I'd stick with the ZoneAlarm plan for now.
  10. I would agree only with "less lines of machine code in a given execution path (i.e. disregarding exception handling code) would run faster than a larger number of lines in the same path" and "more lines of (source or machine) code increases the risk of introducing bugs". However, the (security, stability, extra feature) benefits of the changes/extensions to code (IMO) outweigh the potential performance hit and risk of bugs (as the internal, alpha and beta testing phases before the release candidates will identify and nail the vast majority of the bugs anyway). Here's an example.
      If I have misunderstood you then I apologize - I was quoting you to agree with your statement ("QFT"); the paragraph you pasted above was actually a reference to the suggestions from others along the lines of "while (1) {smaller = better};". (Sorry, I should have rearranged the paragraphs or spaced better in my response.)
  11. In your situation I would have gone into the BIOS and disabled the unused hardware so it is not even presented to Windows when it comes to device enumeration.
      The example you gave was machine-specific, and if there isn't an option in the BIOS to disable the onboard SCSI controller then sure, Device Manager would be the way forward - but that has zero impact on system performance, and I would say the delay is down to the driver or BIOS, not the OS.
      Any post-install customization takes you further from the "out of the box" configuration and into territory where all sorts of issues (possibly way, way down the line) can occur - disabling devices in Device Manager is, to me, a last resort, and disabling Windows services via the Services Control Panel applet is something to look at for servers to be deployed in DMZs or secured environments as a security hardening procedure, not for performance.
      Yes, you can eke out a relatively small decrease in startup time and virtual memory consumption with knowledge of the OS, your system and what you have changed - but I wouldn't do this for other users' systems (only 3rd party service tweaking when they are causing problems, like someone who had Norton and Norman anti-virus installed at the same time, which deadlocked the system ~30 seconds after startup and neither would uninstall properly).
  12. Is it an old SmartStart CD with a recent server, or vice versa? I've noticed that HP tend to drop the older servers from more recent SmartStart CDs. Does this not have the drivers you need?
  13. While the thread is still a "healthy debate" and keeping an eye on the temperature... I wanted to +1 this.
      The focus appears to have been on components being stripped from the install media, deselected during install or disabled post-install with a view to saving resources, with the expectation that this implicitly leads to better performance. Those "erring on the side of caution" are suggesting that care is taken to measure correctly whether there is in fact any difference in performance, and also that the end user is aware of and understands what was changed so that future issues that crop up can have their root cause identified more readily. (In the corporate world this awareness shifts more towards "security hardening" and "group policies" having strange side effects - the latter at least can be filtered out for troubleshooting.)
      Trying to increase performance through tweaks requires a good understanding of what the components do for you or the system; a simple paragraph that describes what a service does, with a recommendation that "it should be okay to disable this - try it and see", doesn't cut it IMO. I feel that Black Viper's list is a collection of such statements that people often follow blindly, and it acts as a placebo.
      Performance needs to be measured accurately, with a baseline and with changes being made individually to observe their impact. Also, as Zxian mentioned, "startup times" are nowhere near as important as "operational speed" - with S3 sleep mode, boot times become completely irrelevant for workstations and I see this being the future, and for most applications, once they are loaded into memory their performance is unlikely to be affected by other consumers of virtual memory (as unneeded ones will already have been paged to disk anyway). Trying to measure how optimized a system is based on the amount of memory (physical or virtual) committed, how long it takes to start up, or where CPU cycles are being spent (when not at 100% for long periods of time) can turn out to be inaccurate, so it's a false economy to try to "fix" these.
  14. The problem in this dump is a "double free" of a nonpaged pool allocation - a driver has already freed an allocation and then tries to free it again, so it's not a corruption and not something you can trap easily with a crash dump (if at all). The culprit driver here looks like Zone Labs' vsdatant.sys - I'm guessing ZoneAlarm or the security suite.
      The virtual memory and running process summary shows no particular issue.
      Did you have a problem with Warcraft 3? There are 2 war3.exe processes: one has an elapsed time of ~4 days and has 0 handles, implying the process did not close properly - the second instance has been running ~18 hours.
      You also have VMware installed, so it might be these 2 products (Zone Labs and VMware) not playing nicely. The onboard Marvell Yukon NIC driver seems pretty recent.
      Depending on how consistent the dumps are (always the same stack or the same drivers in the stack, same bugcheck code, etc.) this could be a RAM fault as it's nonpaged pool (resident in physical memory), but I would be more inclined to believe a driver fault. I would go down the route of either uninstalling VMware to see if the problem goes away, or the Zone Labs software so long as you are behind a NAT router. Or wait until the next dump is produced and we can check for consistency (i.e. always network-related activity on the crashing thread's stack). A few hours testing overnight with memtest86 would not be a bad idea either.
  15. I would agree that threads in the RUNNING or READY TO RUN states are consuming, or queued to consume, CPU cycles respectively, but a process being in the "Running" state does not mean it is consuming CPU time.
      A process can be seen as a "container" for threads, which actually execute on processors. A thread in the WAIT state will not be scheduled for execution - it is an indication that it is waiting for something else to occur before its state is changed and it is poked to continue execution. So unused, passive processes (such as a lot of services) don't waste CPU cycles.
      As for memory consumed, Windows is a virtual memory-based OS, and if a process consumes physical RAM and does not use it then over time it will be paged to disk (to the page file) and have a tiny footprint (working set). Checking the CPU usage and working sets of processes shortly after a boot is not a good way to measure them.
      By all means disable services that you know you do not need (WZC on systems with no wireless or 802.1x requirements), but be wary of disabling anything & everything - you don't necessarily know what the system itself, or future products you come to install, require.
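      A rough way to see this for yourself from a command prompt, as a sketch (KernelModeTime and UserModeTime are accumulated CPU time in 100-nanosecond units, so idle services show tiny values even after days of uptime):

          rem Show accumulated CPU time and current working set for each process
          wmic process get Name,KernelModeTime,UserModeTime,WorkingSetSize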
  16. You live & learn, cheers. Though in practice I don't think I've seen a kernel dump larger than ~800MB, even from x64 Server systems.
  17. Before a user logs on, the profile that is used is the SYSTEM profile.
      To define the locale settings for the SYSTEM and Default User accounts on XP with the default Control Panel layout:
      - click "Date, Time, Language and Regional Options"
      - click "Change the format of numbers, dates, and times"
      - configure the regional settings as required, including the keyboard and input language via the Details button on the Language tab
      - click the Advanced tab
      - check the box "Apply all settings to the current user account and to the default user profile"; the following popup box appears: "You chose to apply these settings to the default user profile. These settings will affect the logon screen and all new user accounts. Some system services may require you to restart your computer before the changes will take effect."
      - click OK on the message, then OK or Apply on the previous window, then restart
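      To confirm the change took effect - assuming the default-user locale lands under the .DEFAULT hive, which is what the logon screen uses - a quick check from a command prompt after the restart:

          rem Dump the locale values used before anyone logs on
          reg query "HKU\.DEFAULT\Control Panel\International"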
  18. That's fine as it is, no need to change those settings - the page file is on the boot drive and can grow to at least 2098MB (2048+50), which is also the largest you could possibly need for a kernel dump on a 32-bit system.
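      If you want to double-check the configured sizes and current usage from a command prompt, a sketch using the wmic page file aliases (values are in MB):

          rem Configured initial/maximum sizes
          wmic pagefileset get Name,InitialSize,MaximumSize
          rem Current allocation and usage
          wmic pagefile get Name,AllocatedBaseSize,CurrentUsage,PeakUsage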
  19. It scans through files on disk and creates a catalog containing keys on which you can search, so you can locate files by contents, not just names.
  20. This was the main reason, I believe, for introducing User Principal Names (UPNs): http://support.microsoft.com/kb/243280
      Users have an easier time remembering email addresses, so these can be added to their user accounts in AD to make their logon life easier - might be something worth looking into.
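      As a sketch, setting a UPN from the command line with the Windows Server AD tools (the account jsmith and the suffix example.com are made up for illustration):

          rem Find the user by sAMAccountName and set a UPN matching their email address
          dsquery user -samid jsmith | dsmod user -upn jsmith@example.com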
  21. I must have failed to explain the use of idle time correctly - the OS does not use all idle time to achieve its goals, in fact very little proportionally; plus, if the system is given something else to do (user-initiated or based on a schedule) then prefetching and indexing, which have much lower process and I/O priorities, get suspended again until the system has been idle for some time.
      There isn't an endless amount of work to be done when idle, and systems spend a huge percentage of their time idle (servers and workstations alike) - there is plenty of proof for this given the number of dumps observed which record the "true" idle time statistics. Don't think of the OS using idle time in terms of SETI or Folding@home projects - those are designed to use all idle CPU cycles. It would make the concepts of SpeedStep and ACPI redundant!
      Okay, so your file system was designed in an organized fashion - you're most likely in the minority of users worldwide - but you're also missing the point of the Windows Search service in Vista being designed around iFilters, so content of any type can be indexed, not just files.
      So the Office team produced iFilters for their products, and now you can search inside emails held in Outlook with the same interface - 3rd parties can produce iFilters for their proprietary file formats to leverage the power of the functionality provided by the OS, instead of having to write their own engine. Files can be tagged with metadata for use in searching, so you can locate images and videos through Search too - for those times when a photo falls into multiple categories and your file system rigidly makes you put it into one folder, or keep multiple copies of the same photo.
      So I would disagree that the indexing service is a waste for most users - quite the opposite.
      How do you know they were unnecessary, and did you look at the actual schedule to see how often they would run?
  22. Or create a brand new group "Restricted Users", make the others members of this group, then use this new group for your explicit Deny ACLs? Much easier to control who is then actively banned from doing specific functions without risking too much impact, as Deny takes precedence over Allow.
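      As a sketch on Vista/2008, something like the following would create the group, add a member and put an inheritable write-deny on a folder (the folder path and the member name SomeUser are just examples):

          rem Create the group, add a user to it, then deny the group write access
          net localgroup "Restricted Users" /add
          net localgroup "Restricted Users" SomeUser /add
          icacls "D:\SharedData" /deny "Restricted Users":(OI)(CI)W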
  23. Indexing, backups or system scans can "access" a large number of files to update that time stamp to the same (or very close) value. Something like Windows (or another) Desktop Search tool, some backup or system restore utility, or anti-virus full system scan, for example.
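      One way to spot that pattern from a command prompt (a sketch - the folder path is just an example): listing files by last access time will show long runs of near-identical timestamps after such a scan:

          rem List files by last access time, newest first
          dir /t:a /o:-d "C:\SomeFolder"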
  24. I put some instructions for setting up Process Monitor here - and if you scroll up a little bit on that page there is a link to a quick search for finding where you can upload to various free file hosting sites. It is MUCH easier to read ProcMon logs if you don't have lots of processes running at the same time while taking the trace, btw. Keep the trace as short as possible - start logging, reproduce the error message and then stop logging as quickly as possible.
  25. Snipped to the extreme, just to show you what you're saying. If it works, then the PC isn't idle.
      That's kind of pedantic - when your system has had nothing to do for a period of time, it starts to do something proactive, and it does not use all idle time to constantly shuffle data from disk into RAM.
      I might be taking this example a bit literally, but easier than: right-click the task bar and un-tick the box "Show window previews (thumbnails)" on the Properties tab?
      But it's also not safe to assume people have the knowledge to locate the manufacturers' websites to get the drivers, or that the website is available when needed, or that they have internet access, or that the driver packages can be silently installed (in the case of corporate deployments)... Plus those annoying situations when doing rebuilds and you fall at hurdle 1 because of the lack of an in-box disk controller driver, or at hurdle 2 because of the lack of an in-box NIC driver, meaning you can't connect to the Internet to get the remaining drivers... The value of a large collection of in-box drivers is immeasurable when they are needed, and the cost (even when not needed) is comparatively trivial.
      In quote 1 you seem to be pushing for an easier user experience, then in the second quote exactly the opposite?
      QFT - smaller code implies less boundary checking, which implies possibly more (not fewer) vectors for malware or stability problems.
      I would agree only with "less lines of machine code in a given execution path (i.e. disregarding exception handling code) would run faster than a larger number of lines in the same path" and "more lines of (source or machine) code increases the risk of introducing bugs". However, the (security, stability, extra feature) benefits of the changes/extensions to code (IMO) outweigh the potential performance hit and risk of bugs (as the internal, alpha and beta testing phases before the release candidates will identify and nail the vast majority of the bugs anyway).