Everything posted by Mr Snrub

  1. Sorry, TMI with the TLA, it gets OTT. SMB is the protocol used for file sharing, typically TCP port 445 traffic. When you said it was NET USE lines that needed a delay inserted to make them work, I assumed I would find the SMB session setup packets and protocol negotiations. Glad you got it sorted, though.
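     For reference, the usual trick for that kind of delay in a logon/startup script looks something like this (timing, drive letter and share name purely illustrative, not from your script):

         rem Give the network stack ~10 seconds to settle before mapping
         rem "timeout" exists on Vista/W2K8; the ping loop is the classic XP fallback
         timeout /t 10 /nobreak >nul 2>&1 || ping -n 11 127.0.0.1 >nul
         net use Z: \\server\share /persistent:no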
  2. If the setup process accepted your key and then did not automatically prompt for the R2 disk on first logon, I would guess you used non-R2 install media and VLK. R2 media requires an R2 key, it's not a free addon for the original W2K3 product. If you are certain the media and VLK are R2, then yes you need to contact MS.
  3. Here are the differences in the DHCP Offers:

     2k3_wds_intel.pcap

     1 16:44:52.598522 0.0.0.0 255.255.255.255 DHCP DHCP Discover - Transaction ID 0xc13982f8
     2 16:44:52.599279 192.168.0.6 255.255.255.255 DHCP DHCP Offer - Transaction ID 0xc13982f8
     3 16:44:54.587257 0.0.0.0 255.255.255.255 DHCP DHCP Request - Transaction ID 0xc13982f8
     4 16:44:54.588081 192.168.0.6 255.255.255.255 DHCP DHCP ACK - Transaction ID 0xc13982f8

     This offer contains:
     Next server IP address: 192.168.0.5
     Boot file name: \boot\x86\wdsnbp.com
     Options:
       53 = DHCP Offer
       1 = Subnet Mask = 255.255.255.0
       58 = Renewal Time Value = 1 hour, 30 minutes
       59 = Rebinding Time Value = 2 hours, 37 minutes, 30 seconds
       51 = IP Address Lease Time = 3 hours
       54 = Server Identifier = 192.168.0.6
       6 = Domain Name Server = 192.168.0.5
       15 = Domain Name = shark
       66 = TFTP Server Name = unclesocks
       67 = Bootfile name = \boot\x86\wdsnbp.com
     ...
     11 16:46:08.003550 0.0.0.0 255.255.255.255 DHCP DHCP Discover - Transaction ID 0xffc62717
     12 16:46:08.004350 192.168.0.6 255.255.255.255 DHCP DHCP Offer - Transaction ID 0xffc62717
     13 16:46:08.004705 0.0.0.0 255.255.255.255 DHCP DHCP Request - Transaction ID 0xffc62717
     14 16:46:08.005822 192.168.0.6 255.255.255.255 DHCP DHCP ACK - Transaction ID 0xffc62717

     This offer contains:
     Next server IP address: 192.168.0.5
     Boot file name: \boot\x86\wdsnbp.com
     Options:
       53 = DHCP Offer
       1 = Subnet Mask = 255.255.255.0
       58 = Renewal Time Value = 1 hour, 30 minutes
       59 = Rebinding Time Value = 2 hours, 37 minutes, 30 seconds
       51 = IP Address Lease Time = 3 hours
       54 = Server Identifier = 192.168.0.6
       15 = Domain Name = shark
       6 = Domain Name Server = 192.168.0.5
     ...
     50 16:47:20.497913 0.0.0.0 255.255.255.255 DHCP DHCP Discover - Transaction ID 0x316c80a2
     51 16:47:20.498615 192.168.0.6 255.255.255.255 DHCP DHCP Offer - Transaction ID 0x316c80a2
     52 16:47:20.498928 0.0.0.0 255.255.255.255 DHCP DHCP Request - Transaction ID 0x316c80a2
     53 16:47:20.499743 192.168.0.6 255.255.255.255 DHCP DHCP ACK - Transaction ID 0x316c80a2

     This offer contains:
     Next server IP address: 192.168.0.5
     Boot file name: \boot\x86\wdsnbp.com
     Options:
       53 = DHCP Offer
       1 = Subnet Mask = 255.255.255.0
       58 = Renewal Time Value = 1 hour, 30 minutes
       59 = Rebinding Time Value = 2 hours, 37 minutes, 30 seconds
       51 = IP Address Lease Time = 3 hours
       54 = Server Identifier = 192.168.0.6
       15 = Domain Name = shark
       6 = Domain Name Server = 192.168.0.5

     2k8_wds_intel.pcap

     81 15:43:24.002765 0.0.0.0 255.255.255.255 DHCP DHCP Discover - Transaction ID 0x36452cae
     82 15:43:24.003093 172.0.1.2 255.255.255.255 DHCP DHCP Offer - Transaction ID 0x36452cae
     83 15:43:24.003336 0.0.0.0 255.255.255.255 DHCP DHCP Request - Transaction ID 0x36452cae
     84 15:43:24.003662 172.0.1.2 255.255.255.255 DHCP DHCP ACK - Transaction ID 0x36452cae

     This offer contains:
     Next server IP address: 172.0.1.2
     Options:
       53 = DHCP Offer
       1 = Subnet Mask = 255.255.0.0
       58 = Renewal Time Value = 3 days
       59 = Rebinding Time Value = 5 days, 6 hours
       51 = IP Address Lease Time = 6 days
       54 = Server Identifier = 172.0.1.2
       15 = Domain Name = WDS.Local
       3 = Router = 172.0.1.2
       6 = Domain Name Server = 127.0.0.1, 172.0.1.2
       44 = NetBIOS over TCP/IP Name Server = 127.0.0.1, 172.0.1.2
     ...
     265 15:44:45.725266 0.0.0.0 255.255.255.255 DHCP DHCP Discover - Transaction ID 0x88d31781
     266 15:44:45.725621 172.0.1.2 255.255.255.255 DHCP DHCP Offer - Transaction ID 0x88d31781
     267 15:44:45.725818 0.0.0.0 255.255.255.255 DHCP DHCP Request - Transaction ID 0x88d31781
     268 15:44:45.726117 172.0.1.2 255.255.255.255 DHCP DHCP ACK - Transaction ID 0x88d31781

     This offer contains:
     Next server IP address: 172.0.1.2
     Options:
       53 = DHCP Offer
       1 = Subnet Mask = 255.255.0.0
       58 = Renewal Time Value = 3 days
       59 = Rebinding Time Value = 5 days, 6 hours
       51 = IP Address Lease Time = 6 days
       54 = Server Identifier = 172.0.1.2
       15 = Domain Name = WDS.Local
       3 = Router = 172.0.1.2
       6 = Domain Name Server = 127.0.0.1, 172.0.1.2
       44 = NetBIOS over TCP/IP Name Server = 127.0.0.1, 172.0.1.2

     The W2K8 server is offering localhost as primary DNS and WINS server addresses, and no boot filename at all. The W2K3 server is not offering any WINS settings, only a valid DNS server, and a boot file name pointing to "\boot\x86\wdsnbp.com" on 192.168.0.5.

     Weird thing is, filtering on the client IP address, all I see are NetBIOS broadcasts for name registrations for the workstation name and workgroup - no SMB activity whatsoever.

     The PXE client doesn't seem to like the response coming from the W2K8 configuration - half the information is missing or bad. Unfortunately I know zip about WDS/RIS so I can't point you in the right direction for addressing this - but that is where I would focus my attention:
     1. Fix DNS server (remove 127.0.0.1)
     2. Fix WINS server (remove)
     3. Fix router (remove)
     4. Fix boot filename (add)
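     If the W2K8 box is running the Microsoft DHCP server role, those four fixes could be scripted along these lines with netsh - a sketch only: the scope ID is assumed from the subnet in the trace, and whether you want options 66/67 at all depends on your WDS setup:

         netsh dhcp server \\localhost scope 172.0.0.0 set optionvalue 006 IPADDRESS 172.0.1.2
         netsh dhcp server \\localhost scope 172.0.0.0 delete optionvalue 044
         netsh dhcp server \\localhost scope 172.0.0.0 delete optionvalue 003
         netsh dhcp server \\localhost scope 172.0.0.0 set optionvalue 066 STRING "172.0.1.2"
         netsh dhcp server \\localhost scope 172.0.0.0 set optionvalue 067 STRING "\boot\x86\wdsnbp.com"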
  4. It's a very expensive way to get the same level of performance that Vista SP1 can provide - they are identical kernels; pretty much only the services installed by default differ... which typically are then added to Server to make it look like Workstation... You are also more likely to run into problems with applications that refuse to run on Windows Server. The error message being returned is an indication that the OS knows it is still in the middle of a previous role or feature addition, so you can't start another until that completes. I've not heard of your symptom where the feature installation simply becomes unresponsive, so I can't even begin to guess what you could have done - unless it's dependent services that have been disabled.
  5. OOBE is the equivalent of "Manage Your Server" - the wizard that launches when an administrator first logs onto a W2K8 server, from where you can also add roles and features. It maybe doesn't launch because the Server Manager wizard is hung in the background. This is the complete reference for servermanagercmd, too much to reproduce here: http://technet2.microsoft.com/windowsserve...3.mspx?mfr=true
     To be honest, if you've not done much with the server so far, I would repair/clean install it. If it is just for playing around with, fire it up in Virtual PC or something.
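     For reference, the kind of thing servermanagercmd can do from the command line (the feature name here is just an example):

         servermanagercmd -query
         servermanagercmd -install Telnet-Client

     -query lists every role & feature and whether it is currently installed.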
  6. What happens if you run oobe.exe? Have you tried servermanagercmd.exe from a command prompt? Did you maybe stop/disable any services manually before adding the features?
  7. You think? Why do you think I put the OS in the first partition on disk0, which is not 100% of the disk? And the remainder of the disk is for non-changing data, so random access to the last 60% of the disk is not going to be an issue for the static data. The applications on disk1 are in different partitions and will not be used concurrently, so no battle for disk I/O or random seeks aplenty there.

     With 8GB RAM and the fact that the Cache Manager uses virtual memory, the pagefile is going to reside on... drive C:, maybe? Which shares I/O with... the OS... of which large portions are cached... meaning reads from RAM or the pagefile. Conversely, putting the pagefile on a drive other than the boot partition leads to issues with creating dump files, and if it is a separate partition on the same disk as anything else to "avoid fragmentation" then you have now increased your average seek time between I/O requests from the Cache Manager/Memory Manager and the other partitions.

     You are thinking short-term here; this strategy is about maintaining a level of performance, not designing a system that runs great from a clean install but over time starts to degrade. I think that is one of the main reasons people have traditionally done clean installs periodically - not because the OS is bloated, but because of how the data is (not) arranged on disk.

     The main points I was trying to get across that you missed were avoiding bottlenecks by having multiple disks to allow concurrent instead of queued I/O - your setup is how I used to partition single-disk setups years ago. And to cover it one more time: for how I use the computer, my configuration has proven to be the most efficient - there is no "one size fits all", and planning for performance requires analysis. Looking at where your data is placed while taking into consideration how you use it and how the OS uses it, understanding how NTFS writes data and how virtual memory and the Cache Manager work, then testing and monitoring with performance tools to compare strategies is the only way to say what works best for a particular usage scenario. As I tried to make clear, it's "food for thought" - not presented as THE way to do it, but some of the things to consider when next doing it.
  8. This one? http://wdc.custhelp.com/cgi-bin/wdc.cfg/ph...hp?p_faqid=1731
  9. Two options, both requiring a second machine:
     1. Use a SPAN or mirror port on the switch to duplicate the ingress & egress traffic from the port to which the client is connected, and run NMCap or Wireshark on a machine connected to the SPAN/mirror port.
     2. Use a hub between the client machine and the switch, and connect the sniffing machine to the same hub to take the trace.
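     Once the sniffing machine can see the traffic, the capture itself can be as simple as this with Network Monitor 3's command-line tool (filename and size here are illustrative):

         nmcap /network * /capture /file client_trace.cap:100M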
  10. Do you have the model number of the hard disk or a URL to the product on Western Digital's site? Does it definitely state 1.0 is supported, not 1.1?
      USB 1.0 = "Low Speed" = 1.5Mbps
      USB 1.1 = "Full Speed" = 1.5Mbps / 12Mbps
      USB 2.0 = "High Speed" = 1.5Mbps / 12Mbps / 480Mbps
      1.5Mbps would be a very, very slow hard disk, which it would have to be running at if your USB port is indeed 1.0...
  11. Quick clarification: a service pack contains all published security hotfixes (GDRs) and non-security hotfixes (LDRs) after the previous fork, up to a certain cut-off date.
      GDR = General Distribution Release - those hotfixes available through Windows Update.
      LDR = Limited Distribution Release (also known as QFE - Quick Fix Engineering) - the hotfixes you see KB articles for that state "if you have the symptoms in this KB article, contact Microsoft to obtain the hotfix".
      Jumping from service pack to service pack might feel "cleaner", but bear in mind you are implementing a LOT of changes in one go, and in troubleshooting terms that can be a nightmare. To present the other side of the argument, if you have hundreds of hotfixes installed and come to install a service pack, there is a lot more to back up into uninstall folders to allow a rollback. Eventually, like Windows 2000, XP will only be supported and have hotfixes produced for its last service pack.
  12. Bloat in the system part of the registry is not usually a problem, though occasionally 3rd party products can behave strangely and run into problems indirectly - recursively creating registry keys with seemingly random names through a bug, then attempting to parse the names later and running out of virtual address space, stack space or, more critically, a system resource. But this type of problem you cannot proactively "clean", and manually pruning a handful of registry keys and their contained values just replaces that part of the database with empty space - it doesn't get packed by default to reclaim the space.
      Bloating of the user profile is a more common problem - HKEY_CURRENT_USER is mounted from NTUSER.DAT for the current user, and in some cases that can be corrupted or get too large to mount properly - though that usually leads to a simpler solution of "delete that user's profile".
      A system state backup will take a snapshot of your registry, so I would recommend using that if concerned about its integrity - especially before adding new hardware or changing drivers, for example... just in case.
      Back to registry cleaning utilities: beware those that are 32-bit and not aware of the virtualization that occurs with 64-bit Windows - they can run into a recursion problem when trying to parse "HKLM\SOFTWARE\Wow6432Node"... and you'd better hope the buggy software is not trying to write (or clean) something there...
  13. You exported the registry as a .reg file? As in, the human-readable text file? That's going to be way bigger than the real thing.
      The registry is a collection of hives and is in effect a database. The files are located in %windir%\system32\config and have no file extension - they are also memory-mapped during the boot of the OS. NTUSER.DAT, from the profile of the currently logged-on user, is the HKCU hive.
      Here are my relevant hives, but this is a 64-bit system and it's been installed a while so is not clean:
      C:\Windows\System32\config\
      2008-07-02 22:00       262,144  DEFAULT
      2008-07-06 21:08    40,108,032  SOFTWARE
      2008-07-06 21:07    28,049,408  SYSTEM
      C:\Users\Mr Snrub\
      2008-07-06 21:13     2 097 152  NTUSER.DAT
      If you go looking for your NTUSER.DAT you will need to look for system files using "dir /as" or "attrib" in the relevant folder.
      http://msdn.microsoft.com/en-us/library/ms724877(VS.85).aspx
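      For example, to see it despite the hidden+system attributes (path assumes the default Vista profile location):

          dir /as "%USERPROFILE%\NTUSER.DAT"
          attrib "%USERPROFILE%\NTUSER.DAT"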
  14. In my experience it didn't make a perceptible difference in system performance moving from a single 7200rpm SATA disk to 2x 10,000rpm Raptor SATA disks in RAID0. Again, this is my experience from my style of using Windows and applications, so other people may have different experiences.
      I had XP x64 and Vista x64 on the same RAID set during Vista's beta period, and it was clear after a while that Vista's Superfetch out-performs XP's prefetch once it has had a chance to build a history of your commonly-used apps. Reading the data from disk before you need it is so much more useful than being able to read it on demand a fraction of a fraction quicker.
      High RPM disks are useful for sustained, contiguous disk operations - if your data is fragmented then the disk will spend a lot of time seeking instead of reading. If applications read a lot of small files then high speed doesn't help so much.
      For me, the cost of 10k over 7.2k rpm disks is not worth it, plus the Raptors are noisy beasts.
  15. When creating a partition you specify its size in MB; I couldn't be bothered to calculate 25x1024 = 25,600, so I just rounded down and used the "hard disk manufacturer" version of 1GB = 1,000MB, i.e. 25000MB.
  16. The CPU is irrelevant - the issue is that the device does not have an in-box driver for XP SP2. From what you have said so far, we know the device works and the OS can see "something"... What is the motherboard on this computer, does it support USB 2.0, and is the drive USB 2.0 only? Was there a CD provided with the USB drive?
  17. The 137GB barrier problem was resolved in XP SP1, and the symptom was that Windows reported a disk as 137GB instead of its true size. If your disk does not appear at all then a few things need to be checked:
      - have you tested it in every USB port on the machine, NOT connected to a hub?
      - do other USB devices work in those same ports?
      - does the disk work in another PC without a problem?
      When you connect a USB device, a PnP event is triggered which makes the OS query the device to find out what it is and what driver to install - or prompt the user to provide one. Are you getting any local disk activity when the USB disk is connected, even very briefly?
  18. I used to use AVG a few years ago, before I came to the conclusion that free AV software was worth every penny you pay for it. One thing to be aware of with AVG 8.0, which would put me off installing it ever again: http://www.theregister.co.uk/2008/06/13/av...raffic_numbers/
  19. This is a quick overview of performance concepts, and my personal approach for how I configure my Vista machines for continual good performance. With a bit of planning beforehand, the system on which the OS runs can provide a way to avoid certain situations where performance can be impacted - some of those situations possibly requiring a rebuild (or at least backup, reinstall & restore) to resolve. You may disagree with my methods, but I'm not presenting this as a "guide 2 make ur computer da fastest" (or to start a big debate), more "food for thought".

      If I have "heavy background tasks" to perform whilst wanting to play a game, then I give the tasks to one of my other machines. I am not one of those people who wants to burn DVDs at maximum speed while playing MP3s, downloading massive files from the Internet, rendering 3D images, chatting on IM and multi-boxing MMOGs - so bear that in mind when thinking how inapplicable this may be to you. I am, however, a gamer who likes to have a browser & mail client up on my second screen while playing games, and I have been building PCs since my first 4.77MHz XT with 640KB RAM, running MS-DOS 3.3.

      FOREWORD
      The key to high performance is avoiding bottlenecks. In terms of raw I/O or operational speed, the order is something like: CPU > CPU cache > RAM > disk & network.

      A CPU bottleneck is resolved by adding more physical or logical processors - most home-built systems will have a single physical processor slot/socket, so the only option (if supported by the motherboard) is to swap the physical processor for one with more logical processors. (But make sure the bottleneck is not being caused by software bugs first!)

      CPU cache is a resource on the physical processor itself; it can improve performance in some situations, but it is not a resource that tends to generate problems by being exhausted or unused.

      RAM I/O bottlenecks are resolved by adding more or faster memory sticks, or making sure you take advantage of dual channel if available (i.e. sticks in the right slots, paired).

      Disk bottlenecks are probably the most common - processes running on the system are left waiting for an I/O operation to complete while there are lots of queued or concurrent I/Os on the same channel. There are several ways to avoid disk bottlenecks, depending on the type of issue you are facing.

      Network bottlenecks are generally not so easy to resolve, but once you get to GigE network speed (especially with onboard NIC chipsets) you will tend to hit a CPU or disk bottleneck first.

      For gamers a bottleneck may also lie in the graphics adapter, or how it is using system memory rather than its own onboard memory - in this case "bigger, better, faster, more" is pretty much the way forward. Updating graphics card or DirectX drivers can only do so much; don't expect a 10-fold increase in performance from changing driver v104.324.77 to v104.324.91 - unless of course there was something REALLY wrong with the older version.

      PRE-SETUP
      Before letting OS installation media get anywhere near a computer, the hardware and its configuration are the first things to address. If buying a motherboard or spec'ing an entire system, I look for the price break - where the features & spec of the hardware are in my price range, just before the leap in price where "bleeding edge" technology is found.
      MEMORY
      Once the motherboard is identified and research has been done to get other users' experience (to avoid BIOS, performance, feature or driver issues), I look at the maximum amount and speed of RAM the system supports. I install the maximum amount of RAM the system supports, at the fastest speed - RAM is so very useful and these days so cheap to acquire that it doesn't make sense to leave room for expansion later.

      CPU
      For the CPU I tend to find the price break again, but also take into account the FSB before the number of cores and amount of cache. The cost of computer kit is ever-declining and the CPU is a trivial single item to update, so I don't waste money now by getting the fastest available today - not only will it be cheaper in a couple of months, there will most likely be faster ones by then (which drive down the value of what I bought). For my workstations at home I use dual-core CPUs, as quad-cores at the same FSB tend to be too expensive and would not benefit me for gaming. For servers, on which I tend to run simultaneous virtual machines, I would get the benefit of more cores, so would consider it. There is also the consideration of matching CPU and RAM FSB speeds to aim for a 1:1 ratio, but I honestly have no idea as to the "real world" value of this - that's the kind of analysis they do at Tom's Hardware or such.

      DISK
      As I said above, the most common bottlenecks I encounter are in the disk subsystem (YMMV), and there are several causes as well as ways to address them. One of the problems with the legacy IDE controller was that I/O through a channel (primary or secondary) runs at the speed of the slowest device (master or slave) connected to it, so having 2 devices on 1 channel is a bottleneck. SATA disk controllers have 1 device per channel, as well as offering higher throughput, lower power consumption and tidier cabling - so I use SATA hard disks exclusively. If I bother with an internal DVD drive at all, I am not concerned whether it is alone on an IDE or SATA port - the bottleneck there is likely to be within the device itself anyway. I have bought a couple of USB 2.0 external DVD-RW drives which I can hook up to whichever machine needs them at the time, to avoid the need to replace/upgrade more internal components.

      When most people think of improving disk performance they think of RAID; however, having tried a 10,000rpm Raptor RAID0 system I am not convinced as to the value of these on home systems, and I take a different approach. RAID0 will give you plenty of pure performance for single I/O requests, but can suffer from multiple I/Os on the same channel as much as a regular disk configuration, and doesn't avoid fragmentation problems. RAID1 can, in some systems, give better read performance IF the controller supports reading from either disk (not all do) - but a write I/O operation is only "complete" once it is written to both disks, so it can be slower than a single disk. RAID is taking a single I/O request from a process and converting it into multiple I/Os for the underlying disk subsystem, so it isn't a "guaranteed win" - the more parties involved in a single I/O, the longer it will take and the more prone to delays or errors it is.
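      A quick way to sanity-check what the board actually ended up with, using the in-box wmic tool (property names are from the standard Win32 WMI classes; what gets reported varies by BIOS/driver):

          wmic cpu get Name,NumberOfCores,NumberOfLogicalProcessors
          wmic memorychip get BankLabel,Capacity,Speed
          wmic diskdrive get Model,InterfaceType,Size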
      Be aware that typical use of a workstation differs from that of a server:
      - software is installed & uninstalled more frequently
      - temporary files are created & deleted through user activity more frequently
      - prefetching is based on user process activity rather than pure file requests

      Servers can benefit a lot from RAID disk subsystems (SCSI 15,000rpm disks being common). Multiple smaller disks are better than 1 massive disk in terms of performance, as the I/O can be split across them simultaneously if you arrange the data correctly. You can also reduce the problem of fragmentation by physical separation of the data on disk - more on this in the SETUP and POST-SETUP sections, though. RAID0 and multiple individual disks naturally increase the risk of data loss in the event of a single disk failure, but to me having a decent backup strategy is preferable to a zero-downtime system for non-critical environments. The multiple individual disk approach is my favoured one at present, as the "system down" risk is the single disk with the OS on it - and that can be clean-installed in ~35 minutes, even from a DVD.

      SETUP
      With the hardware in place, setup can begin - and one of the very few questions posed by a Vista/W2K8 installation is the volume on which to install the OS. I take the first 100GB of disk0 - this will be the active, system, boot, pagefile and crash dump volume, and in the same way that changing a hardware configuration is tricky later on, trying to work around the problem of skimping on the boot volume size can lead to a reinstall. My strategy for installation is performance over "best use of resources" - I am not concerned about wasted disk space as much as I am about unused disks. The rest of setup is standard all the way to the desktop appearing.

      POST-SETUP PARTITIONING
      My main system at present has 2x 250GB disks, and C: is the only drive letter I use for local hard disk storage... so how do I use the other 400GB? Volume mount points.

      The remainder of disk0 is partitioned, formatted and mounted as C:\Data - this volume is used for files that will not change often (possibly ever), such as MP3s, pictures, FRAPS movies, ISO images, driver installation folders, etc. Activity on the boot volume will not fragment this static data, nor will the opposite occur as I add more files into C:\Data. As the static data is not accessed frequently, I also don't generate lots more I/O for disk0 - it is going to be almost entirely OS I/O.

      Disk1 started with no partitions at all, and then I came to install my first application (World of Warcraft), which from experience I know at present takes ~8GB. Again, skimping on the space allocated to a volume can cause problems later, so I make sure to give plenty of space for patches, mods and expansion packs - I made the disk GPT format and created a 25000MB simple partition which was then mounted as the empty folder "C:\Program Files (x86)\World of Warcraft" (a diskpart sketch of this is below). Now I install the application into the new mount point. At this point disk0 is almost exclusively for Windows and the in-box utilities, and 25GB of disk1's 250GB is allocated exclusively for WoW - the remainder of disk1 is neither partitioned nor formatted. As the data for WoW is on a separate channel and physical disk from the OS, launching the game is very quick, as disk I/O for the OS and the game occur in parallel.
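      Roughly, that WoW volume as a diskpart script (disk number, size, label and path are from my setup - adjust to taste, and note the target folder must already exist and be empty before diskpart will mount to it):

          select disk 1
          convert gpt
          create partition primary size=25000
          format fs=ntfs quick label="WoW"
          assign mount="C:\Program Files (x86)\World of Warcraft"

      Save as a text file and run with "diskpart /s script.txt", or type the commands interactively.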
      Also the WoW volume can never fragment the data of anything else, so the little fragmentation that will occur (patches & expansion packs) will only affect itself and can be addressed specifically with:
      defrag "C:\Program Files (x86)\World of Warcraft" -w

      After WoW I installed BioShock, another 25000MB volume on disk1 (this one so far seems to have used <6GB) and a volume mount point of "C:\Program Files (x86)\2K Games\BioShock". Same principle as WoW for performance and fragmentation, and there is no chance I will want to load both at the same time, so no I/O bottleneck. And so on, and so forth for Crysis, Mass Effect, Jack Keane, Assassin's Creed and S.T.A.L.K.E.R. - the last chunk of disk1 I mounted as "C:\Program Files\Other Games" for those smaller games that can coexist without heavy fragmentation being likely. A picture is worth 500 DWORDs...

      Remember, fragmentation isn't just about disk write operations mixed together - when data is removed from the disk it leaves a hole which is then filled by the next write - so uninstalling a large application months after it was installed can cause the next chunk of data written to fill the gap first, fragmenting a brand-new install. So while Crysis might not be fragmenting itself, as it's not patched regularly and doesn't have masses of addons or saved games, when I come to uninstall it there will not be a fragmentation problem caused by that event. Yes, there is lots of slack/wasted space in my configuration, but that is because I put the data physically as well as logically where I want it. I have no illusions that I will still have most of these games installed in 6-12 months anyway, so they can be uninstalled and the volumes remounted elsewhere as needed.

      PAGE FILE
      Where should I put the page file and how big should it be? I always leave it alone, and let Windows determine what it needs and where it should put it. The more customization you do to the OS, the more you have to understand the impact of it - and some decisions you make can lead to problems later. Some apps expect/demand that a page file exists, and refuse to launch if you have selected "do not use a page file". Finding the root cause of a bugcheck generally requires a MEMORY.DMP file - this cannot be created if you've set the page file settings incorrectly (too small, wrong location) or if you don't have the necessary free disk space on the destination volume. Hence the importance of a partitioning strategy before you start. There are those that think the page file gets heavily fragmented and causes a massive reduction in performance - I am not convinced there is any proof of this on modern machines; a lot of performance issues are actually perception errors. Similarly, fragmentation occurs over time or on a very heavily used & abused system, so running a full system defrag daily is overkill.
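      To confirm Windows is still managing the page file, one in-box check (a Win32_ComputerSystem property, Vista and later):

          wmic computersystem get AutomaticManagedPagefile

      TRUE is the default, hands-off setting.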
      SUPERFETCH
      Superfetch is always blamed for performance issues due to it being misunderstood - people see lots of disk I/O, trace it back to this service and disable it, thinking it is part of the "bloat" of Windows. It uses lower-priority I/Os and idle time to populate the system cache with files that form the pattern of "typical usage" for you - which varies from person to person, so is built up over time. If you want to load something from disk, Superfetch's I/Os are pushed to the back of the queue and yours are serviced immediately, so there is no delay, but possibly a perception of "omg my disk is being thrashed".

      The system cache fills up virtual memory, so people panic that Task Manager reports "free" memory as very low - however, as the cache is simply data that is already on the disk in its original location, if memory is needed it can be instantly freed and allocated - so effectively "unused and immediately available memory = Cached + Free". As I type this, of my 8GB physical memory, Cached is 6433MB and Free is 509MB - I happen to know that my system cache is mostly filled with huge files from World of Warcraft minutes after I boot and log on, making it very fast to launch.

      TO SLEEP, PERCHANCE TO DREAM
      "Shave 13.2 seconds off your boot time with this simple list of hacks!"... I never understand these kinds of actions - when programming I looked at optimizing code a lot, and one common trait of optimization is to look not where you get the biggest single saving, but where you get the most overall. I.e. optimizing a routine that is called one time to get a 3-second saving is not as useful as a 0.1-second saving in a routine called multiple times per second for a few seconds. The boot process is a peculiar one to focus on optimizing - the system is starting from completely empty and has to go through various initialization routines before the user can interact with the OS to start apps running, and it needs to do this just once and possibly not again for weeks.

      Sleep - don't shut down. Suspending your system state to RAM means you should be able to bring it back to operational status in under 2 seconds, complete with the system cache preloaded. Your boot process not only has to go through the initialization and self-checking routines, but it starts with an empty cache, so Superfetch has a lot of work ahead of it when the system is eventually idle - and if you log on & start launching your apps immediately, it won't have had the chance to prefetch anything.

      DISABLING SERVICES
      I will always advise against the disabling of services on the premise that:
      - you don't need them
      - more services running = wasted CPU and memory
      - they slow down the boot process

      While you may not explicitly use the services provided by a particular process, without a great deal of experience managing Windows systems you will not know the potential impact of disabling them - maybe not today or next week, but when ApplicationX refuses to install with a random "unspecified error" it could be a pain to track it down to a service that is on by default having been disabled. Services that are not used or do not do anything will consume no CPU time - their threads will be in the WAIT state, not READY TO RUN - and the memory they have allocated is virtual, so it will eventually end up paged to disk and not consume much precious physical memory. As for slowing down booting, see above - the boot process isn't something that should be occurring that much, ideally. Disabling services is for the most part a placebo - yes, there are some situations where services consume masses of CPU time due to bugs, corruptions or misconfigurations, but these should be addressed rather than side-stepped.
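      If you are going to poke at services anyway, at least look before you touch - for example, Superfetch runs as the SysMain service:

          sc query SysMain
          sc qc SysMain

      ("query" shows the current state, "qc" the startup configuration and dependencies.)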
      IDENTIFYING BOTTLENECKS
      Even though this was meant as a way to provide some tips on how to avoid/prevent performance issues, I will mention briefly the tools at your disposal built into Vista to help with identifying which resource is bottlenecked:
      - Task Manager, for an overview of realtime CPU, virtual memory and network I/O
      - Resource Monitor, for a detailed view of realtime/recent CPU, disk, network and memory I/O
      - Performance Monitor, for a longer-term statistical view of system counters for pretty much everything (any counter with "queue" in the name is of particular interest)

      The Resource Monitor can be interesting to watch after booting Vista - observe how Superfetch starts to load files for apps it thinks you are going to be firing up soon, populating the system cache.

      Performance tuning, and designing a configuration for peak performance, is, like security, a journey and not a destination. There is no guarantee this is how I'll set up my next system, but come Windows 7 I will most likely have some idea of what worked & what didn't - maybe just fine-tuning the volume sizes, for example. How you use a computer can strongly influence how you benefit from different methods of performance tuning - there is no "silver bullet". Always use measured methods to determine performance levels, not user perception - baselining is something I reckon hundreds of thousands of gamers do for their graphics card alone (using 3DMark or similar), but they neglect the common components in the system. Sometimes it is worth doing empirical tests, such as observing the impact of removing anti-virus and then comparing with another vendor's product - and be aware that in the case of kernel drivers there is a big difference between disabling a product's services and uninstalling it.
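      On the baselining point, the in-box typeperf tool will log counters from the command line - the "queue" counters mentioned above are a good start (the interval and sample count here are just examples):

          typeperf "\PhysicalDisk(_Total)\Avg. Disk Queue Length" "\Processor(_Total)\% Processor Time" -si 5 -sc 120

      That samples every 5 seconds, 120 times; add -o somefile.csv to keep a baseline to compare against later.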
  20. [Disclaimer: I know zilch about WoL] Are you sending the magic packet to the DNS name/unicast IP address or the MAC address? I could imagine that if you are using the IP address resolved via DNS then you could have 2 possible issues:
      - Dynamic DNS registration expires after 60 minutes, so the name no longer resolves to an IP address
      - DHCP TTL is 60 minutes, so the lease vanishes and no longer associates IP addresses with MAC addresses
      If you're using the MAC address all the time, I guess a network device could be flushing the ARP cache periodically, and as the device isn't powered on to respond to ARP queries the MAC is no longer associated with a port... If it is multicast then the problem shouldn't occur, I assume, if it's caused by a network device.
      If I was trying to troubleshoot this, I would probably get my hands on a hub (not a switch) and connect the machine to wake up and a second machine to it, then connect the hub to your regular network. On the second machine run NMCap or Wireshark to see if the magic packet is even arriving at the hub after the 60 minutes has passed. If the packets are still hitting the hub then the issue is definitely within the network card/BIOS on the target machine.
  21. I believe that combination is your problem - the text-based setup in XP & 2003 will look for an active partition, and in the absence of one it will label the drives as they are enumerated by the BIOS.
      http://support.microsoft.com/kb/825668 Overview of PNP enumeration and hard disk drive letter assignments in Windows Server 2003 and Windows XP
      I think that is also why OEM machines have a system recovery partition or recovery CD - as well as automating the recovery process and avoiding the need to keep hold of the product key somewhere, the fact that it is an unattended installation to a standard hardware platform allows the preparation of the disks during setup.
      http://support.microsoft.com/kb/928543 How to automatically assign a drive letter by using the Diskpart.exe command on a Windows Server 2003-based computer or on a Windows XP-based computer
      If the hardware is not known beforehand, it may be that the disks cannot be prepared via a script, so it might not be possible to create generic "semi-automatic setup-to-C:" installation media.
      [Vista plug] That's why the WinPE-based setup for Vista & W2K8 is much better - the installation letter for the OS will/should be C: regardless of the location of the system partition (where the boot manager resides) and boot partition (where the boot loader and OS reside). You also have a full diskpart utility available to create/delete/modify partitions (including removing the RO flag from broken RAID sets and setting the active partition). [/Vista plug]
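      From the WinPE command prompt, those last two fixes look something like this in diskpart (disk/partition numbers are examples - check with "list disk" and "list partition" first):

          select disk 0
          attributes disk clear readonly
          select partition 1
          active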
  22. Just a quick correction - hard disks are simply leaf node devices on a bus/channel, they don't have drivers themselves. The disk controller chipset is the device which provides the channel(s) on the given bus, and this is what requires a driver - so the type of disk should make no difference to software changes. From what I could see, the Samsung SpinPoint SP0802N is a PATA disk, so it's not even a SATA controller problem (which I would have expected as the SATA bus came along after XP appeared). (The USB root controller is a bit different - it requires a similar driver to enumerate its ports, but then the devices connected to those ports require their own drivers too.)
  23. What PSU are you using in that machine? It may be that it can't provide a consistent supply of juice when the graphics card starts sucking power - and the wattage quoted on the PSU isn't an indication of constant supply, a high quality PSU of a lower wattage can support more devices than a cheap high wattage one. The more hard drives, DVD drives, PCI devices, graphics cards, etc. connected to the system, the higher the drawn power - combine high I/O from multiple devices and it's possible the load exceeds the PSU's capacity.
  24. Got a network trace of startnet.cmd failing to map the drive, followed by a successful manual mapping from the same client? That would be the best place to start - compare the requests & responses and see what's different...