
teqguy
Everything posted by teqguy
-
Actually, particle densities have everything to do with it. Thermal paste acts as a conduit between the IHS and the base of the heatsink, compensating for any imperfections in either surface. The more densely packed the particles are, the more conductive material you have in the gap, and thus the higher the efficiency compared to other compounds over the same area. If you think volume has no effect on heat transfer, you're sadly mistaken.
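To put rough numbers on the conduction argument, here's a quick sketch of Fourier's law for a flat paste layer. The conductivities, layer thickness, and temperature drop below are illustrative assumptions, not measured values for any real paste:

# One-dimensional conduction through a paste layer (Fourier's law:
# Q = k * A * dT / d). All numbers are assumptions for illustration.

def heat_flow_watts(k, area_m2, delta_t, thickness_m):
    # Steady-state heat flow through a flat layer of conductivity k.
    return k * area_m2 * delta_t / thickness_m

ihs_area = 0.03 * 0.03   # ~30mm x 30mm contact patch
delta_t = 10.0           # assume a 10 C drop across the paste layer
thickness = 50e-6        # ~50 micron layer

for name, k in [("loosely filled paste", 0.8), ("densely filled paste", 8.0)]:
    print(name, round(heat_flow_watts(k, ihs_area, delta_t, thickness), 1), "W")

Same area, same layer thickness; the only variable is how much conductive filler sits in the gap, and the heat flow scales with it directly.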
-
Actually, faulty modems and routers are capable of being saturated to the point where they drop their connection. Just because you haven't had that pleasure doesn't mean it doesn't exist.
-
How does what I said contradict that at all?
-
Explain this to me then: http://www.microsoft.com/licensing/highlights/multicore.mspx Read what I mentioned previously, because I distinctly remember specifying that Home does not support multiple physical processors, but does acknowledge multiple logical processors (otherwise known as cores). While true, that's not a correction, because nobody here had mentioned it yet. Actually, for professional software, the only software limitations ARE licensing limitations. Of course, this isn't about how the software was written, but rather how much money they can make from it. I'm going to need some low-level (not just Task Manager) proof on this one, because from what I've read, XP's HAL has no logic for distributing its own processes, or any other processes for that matter, across anything beyond physical processor 0.
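If anyone wants to see the physical/logical split on their own machine, here's a minimal Python sketch; it leans on the third-party psutil package (my pick for illustration; any tool that reads processor topology will do):

# os.cpu_count() only reports logical processors; psutil can also
# report physical cores, which makes the distinction visible.
import os
import psutil  # pip install psutil

print("logical processors (what the scheduler sees):", os.cpu_count())
print("physical cores:", psutil.cpu_count(logical=False))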
-
Gdogg, I just explained what is actually going on within Windows and why one would go with the licensing of Professional vs Home... and yet you went and said exactly the opposite of what I said, without giving any reason why your reasoning is true. It's virtually IMPOSSIBLE for your processor to be devoting 50% of each core's resources to one task, because load balancing does not divide the process in half... it just sets the core or thread affinity of that process to one processor, and then sets the affinity of any other processes to the other processor. That processor's architecture doesn't have a shared cache, so processes can't be halved and shared. As far as SMP, load balancing, and thread/core affinity are concerned, Windows does NOT have, and never has had, any option of the sort, outside of one "cluster" configuration in NT4, which pales in comparison to any Linux distro oriented toward clustering. Software in the form of applications always has been and, as far as I'm concerned, always will be responsible for driving how SMP and multithreading are made use of. Why? Because Microsoft's developers are lazy.
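For the record, this is all that "setting affinity" amounts to. Here's a minimal Windows-only Python sketch using the real Win32 call SetProcessAffinityMask through ctypes; the mask value is just an example:

import ctypes

kernel32 = ctypes.windll.kernel32
handle = kernel32.GetCurrentProcess()   # pseudo-handle for this process

# Bitmask of allowed processors: bit 0 = processor 0, bit 1 = processor 1...
mask = 0b01                             # pin this process to processor 0
if not kernel32.SetProcessAffinityMask(handle, mask):
    raise ctypes.WinError()
print("process now runs only on processor 0")

Nothing gets "halved" anywhere; the process is simply told which processor it's allowed to run on.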
-
Well, not every disc is manufactured the same... and naturally CD-Rs aren't fabricated as well as, say, a pressed disc from Microsoft. There's always potential for deviations in the substrate, the outer coating, or even the balance of the disc. Weight also plays a part in stability while the disc is spinning.
-
Well, if it isn't that, then it's probably not a software issue and therefore must be a hardware issue. It might not be a drive failure, but as others have mentioned, at the cost of DVD writers these days, further investigation isn't worth the time or effort.
-
He might just need to repair or update his ASPI layer: http://www.afterdawn.com/software/cdr_soft..._tools/aspi.cfm
-
This is completely false. Windows XP supports SMP, and thus dual core processors can be effective within XP, provided you have software that actually makes use of it. Furthermore, the idea that the operating system doesn't have to take advantage of multithreading just because the application does is a completely wrong train of thought, and one that will make dual core adoption a lot slower. The operating system SHOULD take advantage of multithreading and load balancing, even if it's just devoting one core to the operating system and one to applications.
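To be clear about the application side, here's a minimal Python sketch of a program dividing its own CPU-bound work across two cores instead of waiting for the OS to do it; the workload function is a made-up placeholder:

from multiprocessing import Pool

def burn(n):
    # placeholder CPU-bound work
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    with Pool(processes=2) as pool:     # one worker per core
        results = pool.map(burn, [5000000, 5000000])
    print(results)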
-
Ramdisk from SuperSpeed Software is probably the easiest to work with, as mounting and configuring drives is all done within a GUI: http://www.superspeed.com/desktop/ramdisk.php
-
For me, putting two large hard drives that you know will individually underperform a lower-capacity, single-platter hard drive into RAID0 is like beating a dead horse. The most intelligent solution, which also lets you have your cake and eat it too, would be to get two 80GB single-platter drives and RAID them, then get a large-capacity hard drive for storage. What I do when it comes to encoding is divide the file into 256MB chunks, then devote 384MB of RAM to a ramdrive, which I then encode to. This improves encoding times tremendously, because the processor isn't waiting on the hard drive to write data. If you have enough memory, the best solution is to put both your source and destination files in memory, so the processor is reading from and writing to RAM. Using the chunking method above, though, you don't need a lot of memory... you just need to divide the file into small enough portions.
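Here's a rough sketch of the chunking step, in Python for illustration; the paths and the ramdrive letter are made up:

CHUNK = 256 * 1024 * 1024   # 256MB pieces, sized to fit the ramdrive

def split_file(src, dst_pattern):
    # Write src out as numbered CHUNK-sized pieces; returns piece count.
    with open(src, "rb") as f:
        index = 0
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            with open(dst_pattern.format(index), "wb") as out:
                out.write(data)
            index += 1
    return index

# e.g. split_file("capture.avi", "R:/chunk_{:03d}.bin")  # R: = ramdrive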
-
Large hard drives are poor performers, period. Platters in general have a deficiency when reading and writing near their ends (the inner tracks), because of the physical structure of the platters themselves (think in terms of a race track). If you look at the HDTach benchmarks on the page you sent me, you'll see that halfway through the drive, the performance drop is tremendously significant. Single-platter drives will also show this gradual decline at the end, but it will be less severe.
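The race track point is easy to put numbers on: at constant spindle RPM and roughly constant bit density, sustained throughput scales with track radius. The radii and outer-edge speed below are assumptions for a typical 3.5" platter:

outer_radius_mm = 46.0
inner_radius_mm = 20.0
outer_throughput = 60.0   # MB/s at the outer edge (illustrative)

inner_throughput = outer_throughput * (inner_radius_mm / outer_radius_mm)
print(f"inner tracks: ~{inner_throughput:.0f} MB/s "
      f"({inner_radius_mm / outer_radius_mm:.0%} of outer edge speed)")

That's the gradual decline you see in HDTach: the drive fills from the outer edge inward, so the back half of a big drive lives on the slow part of the platter.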
-
In a home environment, how much of this so-called irreplaceable data is actually irreplaceable? Outside of memories and documents, what else would a home user want to save from, say, an earthquake or fire? The better solution for a home user would be to back up data to an external web server, where it could be accessible from anywhere. From what it seems, the data redundancy RAID1 provides is completely superfluous to what you intend to accomplish. The more adequate solution would be to install all of the applications you use on a daily basis, make a complete hard drive backup to either DVD-Rs or another hard drive, and store it on a shelf for easy retrieval if something goes wrong.

As I mentioned in my first post, large-capacity drives make for terrible RAID0 arrays. There's a significant decline in bandwidth once you're onto the second platter, which largely negates the benefit of a RAID0 array. Furthermore, as Tom's Hardware demonstrates, NCQ is really not as mature as it should be to make much of a difference. In some random read tests it did slightly better than with it off, but at the cost of bandwidth. NCQ isn't meant to increase hard drive performance where it's most needed (read/write bandwidth), but rather in random access times.

If you want a completely free alternative to NCQ, just partition your hard drive so you separate Windows from your applications, documents, and temporary files, then defrag each partition with Power Defragmenter GUI (found at http://www.excessive-software.eu.tt), which drives contig.exe from sysinternals.com. This will ensure that the hard drive rarely needs to perform a random access. There's no such thing as a 300GB Raptor, but if there were, two of them in RAID0 would be slower than one 74GB Raptor.
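If you'd rather script it than use the GUI, contig.exe can be driven directly; here's a sketch, in Python for consistency. The -v and -s flags (verbose, recurse into subdirectories) are from memory and the paths are made up, so run contig.exe with no arguments first to confirm the usage on your version:

import subprocess

# Defragment everything under the documents partition in place.
subprocess.run([r"C:\tools\contig.exe", "-v", "-s", r"D:\Documents\*.*"],
               check=True)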
-
That's not physically possible, due to the way software is currently designed. A 50/50 split would be load balancing, which Windows does not manage, just like it doesn't manage thread or core affinity. This is an inherent "bug" in the X2's architecture (similar to how, with Hyperthreading, Windows will sometimes report both the physical and logical cores at 100% usage when in fact only the physical core is being used). It's nothing to be alarmed by, though; it's just the way the System Request Interface and Crossbar manage threads.

As far as this whole Home and multi-core/multi-processor fiasco is concerned, the most definitive answer I can give you is this: despite the fact that Home's kernel (which is the same NT/2k-based kernel Professional uses) does offer support for dual core processors, it does not license them, and this is where you'll run into a problem. Certain applications (mostly in the professional sector) are very strict when it comes to licensing (as demonstrated by the ludicrous registration/activation processes they make you go through). These applications have the ability to enforce a strict licensing policy, and thus will either prevent you from using SMP or from running the application altogether. So, for the average joe, this is no problem at all; they can go ahead and buy a dual core processor, run Home with all of their "average joe oriented" applications, and never run into a problem. However, if you're anything but an average joe, you definitely want to be going with Professional.
-
Traditionally, RAID has only been used in environments where cost isn't an issue... and the only reason it's considered cost effective in such environments is that the only faster storage medium is an array of RAM, which of course requires the system to run 24/7 or be backed by battery power. Consider that Raptors offer lower access times, while RAID0 offers higher bandwidth, then apply each to the tasks you do on a daily basis. The point I'm trying to make is that you shouldn't use a RAID array unless it's going to benefit you significantly, because the benefit is limited to specific tasks, and performance is not always going to be proportional to what you'd expect.
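Here's a toy comparison of where each one wins; every figure below is an illustrative assumption, not a benchmark:

# Loading lots of small files is dominated by seeks (the Raptor's
# strength); moving one big file is dominated by bandwidth (RAID0's).
configs = {
    "single Raptor": {"seek_ms": 4.5, "mb_s": 70},
    "RAID0 pair":    {"seek_ms": 8.5, "mb_s": 120},
}
for name, c in configs.items():
    small = 1000 * (c["seek_ms"] / 1000 + 0.05 / c["mb_s"])  # 1,000 x 50KB files
    big = 2000 / c["mb_s"]                                   # one 2GB file
    print(f"{name}: ~{small:.1f}s for the small files, ~{big:.0f}s for the big one")

Match the workload to the hardware: if your day is mostly small reads (launching apps, browsing, gaming), access time wins; if it's mostly long sequential transfers (encoding, rendering), bandwidth wins.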
-
Well, to be brutally honest, dual core on a desktop is currently useless. Windows doesn't acknowledge or take advantage of it, and the library of software able to take advantage of SMP is fairly small, or limited to professional applications that hardly have a niche in the desktop (aka average Joe) market. As for the "future proof" mentality... look how far that got all the people who jumped on the Athlon 64 bandwagon for the security of not being left in the dust by "64bit processing". SSE3 came out a year after x86-64 and many applications already include support for it, whereas x86-64 is still off to a slow start when it comes to adoption. It's a technology that's been available for over 3 years and has still failed to prove its worth. This is the same road dual core processors are going to have to take... especially considering they're now waiting on x86-64 to be fully implemented before they'll get a taste of the action. However, to answer your question: yes, Professional is required for multi-core/multi-processor setups, as noted in this comparison of Home and Pro: http://www.microsoft.com/windowsxp/pro/how.../choosing2.mspx "Scalable processor support – up to two-way multi-processor support" is the feature you're looking for.
-
End of my third bullet point: "At the very most, you'll notice levels load a second or two faster(which is completely negligible to average human perception)."
-
A few things to note about RAID:

+ Hardware-based RAID requires IDENTICAL hard drives, meaning no deviation in specs, formatting, etc. Software-based RAID, such as the striping option in Windows, is a little more lenient: you can use mismatched drives, but data redundancy is at higher risk.

+ Large-capacity hard drives make for absolutely terrible RAID arrays, especially when they're multi-platter drives. Not only is performance reduced significantly, but with a higher capacity you have more room for data, and thus more data to lose if something fails.

+ RAID0 (otherwise known as striping) offers negligible performance enhancements in a normal desktop environment; see the toy striping model after this list. On a desktop, disk access comes in bursts tied to what's actually happening on the screen, whereas on a server or workstation, disk access is constant and directly proportional to what's happening in the application. Thus, gaming performance will NOT improve whatsoever. At the very most, you'll notice levels load a second or two faster (which is completely negligible to average human perception).

+ RAID1 (otherwise known as mirroring) offers negligible data redundancy in a desktop environment. This is because (A) most data on a desktop is replaceable, and (B) compared to other methods of redundancy (i.e. Norton Ghost and/or GoBack, DVD-RW backups, flash drives, external hard drives with scheduled backups), RAID is simply not cost effective.

So, to put it simply, unless you're doing tasks that are hard drive bandwidth intensive, such as rendering or encoding, the difference between RAID and a single hard drive is negligible. If you want better load times, you'd be better off buying a Raptor.
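To make the RAID0 point concrete, here's a toy model of striping in Python; the stripe size and data are made up:

STRIPE = 64 * 1024  # 64KB stripes, a common default

def stripe_write(data, disks):
    # Deal out fixed-size stripes across the disks in round-robin order.
    for i in range(0, len(data), STRIPE):
        disks[(i // STRIPE) % len(disks)].append(data[i:i + STRIPE])

disk0, disk1 = [], []
stripe_write(b"x" * (1024 * 1024), [disk0, disk1])
print(len(disk0), "stripes on disk 0,", len(disk1), "on disk 1")

Every other 64KB lives on the other drive, which is why sequential throughput can nearly double... and why losing either drive loses every file in the array.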
-
If you have a motherboard that supports PCIe, I would go for the 7800GT instead. Two 6600GTs in SLi can't compete with the performance of a 7800 unless you have software that actually supports SLi... and even then, the graphical quality of the 7800 is unmatched. Furthermore, SLi is one of the most unnecessary technologies to date, and by supporting it, you're supporting moving PC gaming entirely into the niche market, where only people who spend $2000+ on a computer can afford to play.
-
Actually, the backplate only prevents the motherboard from bowing under stress; it doesn't eliminate the stress, it just distributes it more evenly. Heatsinks like the Ninja still have a lot of weight and a very long moment arm, and the backplate doesn't relieve either.
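Rough numbers for the moment arm point; the mass and distance are illustrative assumptions for a big tower cooler:

mass_kg = 0.8   # ~800g of heatsink and fan
arm_m = 0.08    # center of mass ~8cm off the motherboard
g = 9.81

torque_nm = mass_kg * g * arm_m
print(f"bending moment at the socket: ~{torque_nm:.2f} N*m")

The backplate spreads that moment over a wider area of the PCB, but the moment itself never goes away while the board is vertical.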
-
Could it be that the number of items to list was just set to zero and the list cleared? To be frank, I've never seen the purpose of a "frequently used programs" list, or even the Quick Launch bar for that matter. After all, desktop items are just as easily accessible, if not more so.
-
Well, let's take a minute to weigh the positive against the negative: fixing that one dead pixel, or... cracking one or both of the polarized panels and destroying your monitor entirely. I'll leave it up to you to decide which is which...
-
File size isn't the only measure of bloat. RegSeeker packs a lot of useless features into a fairly small package. Why? It's not because it can, or because the developer found it necessary... it's because it fails at the one thing its name implies it should be adept at: cleaning the registry. It skips the keys it should be removing, sometimes completely trashes the system by removing ones it shouldn't, and worst of all, tends to lose backups, so you have no chance of recovery.
-
If your panel has an issue with liquid crystal distribution, misalignment of the polarization panels, etc., then you're already in serious trouble and should just refer to the warranty. Most manufacturers offer very forgiving warranties these days, so consumers should make use of them whenever they find a problem with a product. That'll teach them to start making quality products in the first place...
-
I think not, considering a lot of the replacement icons XPize installs attempt to resemble Vista's... and from what I've seen, very few people are going to be displeased with those.