Everything posted by CoffeeFiend

  1. I'd need a loupe with that For everyday tasks like you mentioned? For sure. Yep. Hard to say exactly how fast it is compared to C2Ds, and there don't really seem to be any benchmarks of it anywhere. Either way, it should be plenty for most stuff. As long as you don't plan on doing a lot of CPU-bound, battery-melting work on it, it should be fine; otherwise go for a faster CPU and possibly an extra battery...
  2. CL5 isn't bad; it's the usual stuff most people buy nowadays. @ 800MHz that's 12.5ns latency. It's no worse than DDR2-667 CL4 really (I figure that's what you might be using, looking at your sig): the extra clock speed more than makes up for the extra half nanosecond -- 4% higher latency, but clocked 20% faster. With some luck, his RAM could work at CL4 (my OCZ Plat XTC rev 2 does), or at least OC to 900+ (still at CL5), making it faster in both latency and clock speed. Some people even manage 1000MHz @ CL4 with that RAM, albeit at a higher voltage. PC2-6400 CL5 is around $30/2GB, but PC2-6400 CL3 is nearly impossible to find. The only such RAM in stock I've seen is $225/2GB, which is very expensive considering you can get 2GB of DDR3 PC3-10666 for $85 (pretty much the same latency, but clocked at 1333MHz), or 4GB of DDR3 PC3-12800 CL7 (twice as much RAM, clocked twice as fast i.e. 1600MHz, and with a latency under 9ns) for only $10 more ($235). And while I'm no gamer, I don't think shaving 2.5ns off his RAM latency by replacing it with CL4 RAM is going to make any noticeable difference in anything other than memory benchmarks. As for the video card benches, no idea, I don't follow these things at all.
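If you want to sanity-check those latency numbers, here's a quick sketch (C#; just a back-of-the-envelope calc of mine, nothing official) of where they come from:

    using System;

    class CasLatency
    {
        // CAS latency in ns = CAS cycles / memory clock; for DDR/DDR2/DDR3 the
        // clock is half the data rate (two transfers per clock).
        static double LatencyNs(int casCycles, double dataRateMHz)
        {
            double clockMHz = dataRateMHz / 2.0;
            return casCycles / clockMHz * 1000.0; // cycles / MHz gives us, so x1000 -> ns
        }

        static void Main()
        {
            Console.WriteLine(LatencyNs(5, 800));  // DDR2-800 CL5  -> 12.5ns
            Console.WriteLine(LatencyNs(4, 667));  // DDR2-667 CL4  -> ~12ns
            Console.WriteLine(LatencyNs(7, 1600)); // DDR3-1600 CL7 -> 8.75ns
        }
    }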
  3. Yeah, it thinks it's 13h instead of 13 mins. No big deal, plenty of other ways... Instead of the other formula in my previous post, enter =A1/86400 (it'll say 0.004513889 -- that's perfectly normal; there are 86400 seconds in a day, and that's 0.004513889 of a day). Now set the format to time, and it'll show minutes:seconds or such. It'll work fine when you do math with it.
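To illustrate (my numbers, not from the thread): with 390 in A1, =A1/86400 gives 0.004513889; apply a custom mm:ss format and the cell displays 06:30, and since it's now a real Excel time value, adding, subtracting or averaging such cells behaves properly.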
  4. Of course there are trials. Then again, you could use a WinXP eval on your box too. Same thing for Vista. And I suppose I could get the 30-day trial of every shareware app too, and reformat when those expire. And then again, you could buy an HDTV from a large retailer with a 30-day money-back warranty, return it on the 29th day, and do that year-round. And resetting your trial period all the time + reformatting and reinstalling everything more than once a year sure is a big hassle if you ask me, considering all the time it takes to install & configure everything if you use a lot of apps (after 240 days, mine finally has pretty much all I need installed, configured & tweaked how I want it...) Besides, it's illegal to do so. Just read the EULA: "Not for Production Use" "Solely for purposes of demonstration and internal testing" "After this time, you will need to uninstall the software or upgrade to a fully-licensed version of Windows Server 2008." -- as in, stop using it, or buy it (NOT reformat as a way to bypass the trial limit). It's wrong on many levels. It's meant for testing ONLY, NOT perpetual use for free.
  5. OMG. That system must be running excruciatingly slow! AVG, Symantec AV, ZoneAlarm, STOPzilla, SUPERAntiSpyware, SpamSubtract all running at the same time? That's the most I've ever seen... Plus all the many other startup entries, like Apple's, Photoshop Album's, the Yahoo mail thing, the Acrobat tray icon thing, Webshots Desktop, Evidence Eliminator, Cinema Manager, 5 various processes for HP stuff, VTTimer, media player sharing stuff, tons of wireless tray icon things (Belkin's + Dell's + 3 processes for Broadcom's -- some of these likely aren't legit), plus extra toolbars (like Yahoo's), QuickTime, entries for RecordNow and nView, the Office startup, Google Updater stuff, etc. Again, I don't recall ever seeing so many startup processes on any computer, ever. You still have some suspicious entries, like this one: O4 - HKLM\..\Run: [20ddfb3d] rundll32.exe "C:\WINDOWS\system32\tejvfwey.dll",b but when things get so bad that drives are missing and the system is loaded with unnecessary processes like that too, you just might be better off reinstalling clean, and trying not to get infected like that next time. It looks like you're an IE user, and that's where most of that nasty stuff came from (lots of BHOs, namely). Ditch IE, and all that nonsense will stop for good, and then you won't need all those antispyware things and whatnot.
  6. Same here. I've had Firefox 2 + VMware + Photoshop CS3 + Visual Studio 2008 + many more open at once (sometimes using them while encoding XviDs with VirtualDubMod), even on 2GB, and it worked just fine... 4GB should be a complete non-issue. Photoshop isn't really incredibly memory-intensive anymore. It's not quite like it was back when we only had 3-digit RAM sizes (most of which was used by the kernel), where every MB counted... PS CS3 opens in 90MB of RAM, and even with a decently sized pic loaded, it only shoots up to 110MB or so. Add a couple of layers, and you still don't even hit 150MB. It's not a big concern by today's standards (especially not if you have 4GB+). Games-wise I dunno, I don't play any, but then again you could basically buy a PS3 (40GB) + an Xbox 360 Pro + a Wii for the price of a Win 2008 license. That should have most of your gaming needs met... Perhaps, but then people would have a bunch more questions to answer during the install... And components installed shouldn't make much of a difference except for disk space. Services running by default perhaps would (then again, the same thing applies to XP). And realistically, it's not much more work to disable stuff post-install, just like we've been doing with XP anyway (like disabling System Restore and such things). Personally I don't really disable much anyway (System Restore, the Defender stuff, and that's about it). Either way, for those who want to do that, there are apps like vLite I guess.
  7. I don't know what you're doing then, because I have it booting in 365MB of RAM, so that should leave you with like 3700MB for your apps. Unless you're referring to SuperFetch preloading (caching) things in RAM as killing it? I don't know... It runs just fine even on 2GB. It might be worse if you're using Media Center or such, but overall it's quite good. And Vista would be tuned for slowness and unreliability? What makes the server versions more reliable is things like video acceleration being disabled (i.e. at the expense of speed), audio being disabled, and the lack of user apps running (and of a user behind the keyboard doing strange things). Similarly, if you left an XP or Vista box alone, just to serve files on your network or such, you'd see record uptimes too. Ditto. It's basically Vista + extra server components (e.g. Active Directory) and some limits removed (e.g. IIS 7's connection limits). The extra server components sure don't make the system any faster. And it's definitely not worth the price tag. Plus, I remember back then trying to use Win 2003 as a desktop (I needed IIS 6 to test stuff, and XP comes with the crippled 5.1) and it wasn't exactly a great experience. I had to edit MSIs for installers to even run, I had to use compatibility mode on a lot of apps and all that -- not counting those that couldn't possibly work (using things that were incompatible with 2003), apps that refused to install (you need to buy the expensive server version of them), and those I gave up on (didn't want to start using apps like TweakNT for them to run). And there were basically zero performance benefits. That's also not counting all the extra work to make Win 2003 even usable as a desktop in the first place (enable sound, video acceleration, disable the shutdown tracker, etc.) -- that was certainly more work than it is to get Vista "tuned" in the first place.
  8. I think this is quite clear: The same applies to Vista.
  9. As in, you're still going to have patches for a few years, yes. It's also going to be available (the home version) to system builders for the next 6 months, and a little longer exclusively (again, home version only) for ultra-low-cost PCs (dirt cheap low-spec'ed portables, like the Eee PC and the OLPC) But it's very much on its way out. As MS' own web page says: So whatever's on the shelves now is all that's left pretty much.
  10. One of several ways to do it. Assuming 418 is in cell A1: =INT(A1/60)&":"&INT(MOD(A1,60)) displays 6:58 -- i.e. you use MOD (modulo) to get the remainder of the division by 60, which is the seconds.
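One caveat (my addition): the seconds aren't zero-padded that way, so 365 would come out as 6:5 rather than 6:05. If that matters, =INT(A1/60)&":"&TEXT(MOD(A1,60),"00") pads the seconds to two digits.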
  11. I've seen that happen lots actually. Most of the time, the problem wasn't actually with the NIC, but rather the switch at the other end -- especially if it's a consumer-level router (d-link being particularly bad). The switch chips used in them tend to be quite poor (everything to save a half penny!) You try to push a few GB through them, and then they overheat & reset, dropping your network connection in the process. Usually, if you wait long enough (so it cools down), the network connection re-establishes itself... until the next time you try to transfer anything. I'm not saying it's what you're experiencing, but it happens quite a bit. The solution to that is usually a good network switch. But then again, it might be the NIC or drivers too. In addition to what Mr Snrub said, it would be worthwhile to look for an updated driver (never hurts to try, it's easy to just hit "roll back" if there's any problem). I'd also peek at the event log for more hints.
  12. Nothing wrong with that. It's my main/fav language personally. Yep. Even crappy old VB6 can manage to do the required API calls... You just have to find which API calls you're going to make, how they have to be made (there's documentation for everything), and then implement code that does that. In most cases, you'll find existing/sample source code that does just that.
  13. Then perhaps you've picked a bit too complex a task to start with... Some people are inevitably going to say C++ for this, but anything will work really. You just need to make the calls to the required APIs properly, and it'll work just fine. Look at VirtualQueryEx, or OpenProcess + ReadProcessMemory, etc.
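A rough sketch of what those calls look like from C# via P/Invoke (the PID and address below are placeholders I made up; finding the right ones is the actual hard part):

    using System;
    using System.Runtime.InteropServices;

    class ReadMemSketch
    {
        const uint PROCESS_VM_READ = 0x0010; // access right ReadProcessMemory needs

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern IntPtr OpenProcess(uint dwDesiredAccess, bool bInheritHandle, int dwProcessId);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool ReadProcessMemory(IntPtr hProcess, IntPtr lpBaseAddress, byte[] lpBuffer, int nSize, out IntPtr lpNumberOfBytesRead);

        [DllImport("kernel32.dll")]
        static extern bool CloseHandle(IntPtr hObject);

        static void Main()
        {
            int pid = 1234;                    // placeholder: target process ID
            IntPtr address = (IntPtr)0x400000; // placeholder: address to read from
            IntPtr hProc = OpenProcess(PROCESS_VM_READ, false, pid);
            if (hProc == IntPtr.Zero) return;  // bad PID, or access denied
            byte[] buffer = new byte[256];
            IntPtr bytesRead;
            if (ReadProcessMemory(hProc, address, buffer, buffer.Length, out bytesRead))
                Console.WriteLine("Read {0} bytes @ 0x{1:X}", bytesRead.ToInt64(), address.ToInt64());
            CloseHandle(hProc);
        }
    }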
  14. The split is about 50/50. A year ago, it was like 99/1 in favor of XP. Give it 6 months to a year, and it'll be like 75/25 in favor of Vista. Nothing surprising here. Exactly my setup (actually not even -- my CPU isn't exactly top-notch, it's just a lowly E2160), except it's not slow at all (and it wasn't sluggish back when it only had 2GB either). Very responsive, and way more reliable than XP. Totally. Good drivers make all the difference. Also, criticizing gaming performance (how many FPS they get in games, as seen in benchmarks) isn't really a valid point anymore. Drivers have gotten better, and the problem is solved for the most part. As for software not working, I have to call FUD on that one. Just see how very little software ended up being added to this thread (no titles named here either)...
  15. AMD announced 2nd-quarter losses of $1.2 billion -- the 7th quarter in a row with losses. Their CEO resigned. He's being replaced by Dirk Meyer, an engineer and AMD employee. As some news sources state, the change in CEO won't really change anything. Their stock just went down 12% too. It's not looking good for AMD. And again, Nehalem is just around the corner, etc. Multiple news sources
  16. You do realize the last post in this thread was 1372 days old (almost 4 years ago)? He just might have found his answer since then...
  17. The thing is, like a LOT of companies that sell PSUs, they don't actually make the units themselves. The low-end Cooler Master PSUs are built by a different OEM, whose quality is not so great. There are a large number of companies that sell products like that, ranging from low-end/not exactly great PSUs to more expensive quality units. Then again, there are somewhat reputable brands (e.g. Enermax) that make some not-so-great products (I've seen so many dead Enermax PSUs it's not even funny, and many of their designs aren't that great, like the 1000W Galaxy -- very high ripple on that). It all comes down to who made it, the design they used, and the parts used (whether they skimped somewhere to save a buck). Whose name is put on it (rebadged) doesn't really matter, it's just a sticker. I didn't say 4x19A = 76A; combined is 54A like I said (648W total -- the limit here is how much power the transformer's secondary can deliver). Like I said before, I'm not a complete moron. I've built SMPSes from Maxim ICs + International Rectifier MOSFETs (the good old IRF series -- great for H-bridges too) and even classic chips like the ICL7660 (you know, back in the day when we still had ECG and NTE books?). Designing & building power supplies and audio amps was one of my hobbies in the '90s. Actually, not so much. Multiple rails are in fact better in most cases. This way, even if my hard drives spinning up sucked so much power that the voltage dropped too low or such (not the case, but let's pretend), the CPU would still be unaffected, as it's fed from another rail (they have individual current limiters). So it all comes down to what you plug into which rail... I'm not too worried about the load of my totally l33t 8500GT that never sees a workload above 1% either
  18. Just a very quick update about that big peak @ 512KB. I did try testing 3 different combinations: 1024KB reads + 1024KB writes (pretty slow, nearly as bad as 512/512), 1024KB reads + multiple smaller 64KB writes (just like Vista does, and basically the sketch below), and 10MB reads + 10MB writes (like TeraCopy does). The first is very slow, like I found out earlier (same figures). The second (1024/64) is a great deal faster -- from about 60% faster (on 700MB file transfers) to nearly 4x faster (on 50MB files). So it must be the larger writes that Vista isn't liking (again, probably the write-behind cache filling up). That nasty peak in the curve pretty much vanishes if you keep the write size smaller, seemingly (larger reads don't seem to be problematic). And the third (10MB/10MB) ranges from not very fast at all (only 15% faster than the 1024/1024 fiasco for 50MB files) to about the same as 1024/64 on very large files (60% faster on 700MB files), so I'm not really seeing any benefit to their approach yet (however, I'm repeating one 10MB read + one 10MB write, and not 3 reads then 3 writes like them, so that might change things a little bit). I haven't benchmarked TeraCopy either, but it's not looking like it would be any faster than Vista's built-in file copy right now. So perhaps I'll break the benchmark down into 4 distinct parts: multiple small reads for one 64KB write, to see how reading smaller chunks affects performance (seeking increases a lot, likely much slower); one large read for multiple 64KB writes, to see how reading bigger chunks affects performance (seems to help so far, and 1MB seems like the sweet spot too); one 64KB read for multiple small writes, to see how writing smaller chunks affects performance (seeking increases a lot, likely much slower); and multiple 64KB reads for one large write, to see how writing bigger chunks affects performance (512KB seems to hurt performance a LOT). More details, and more graphs, later on.
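(For reference, the 1024/64 pattern is roughly this, as a C# sketch -- error handling omitted, and the file paths are whatever you pass in:)

    using System;
    using System.IO;

    class AsymmetricCopy
    {
        const int ReadSize = 1024 * 1024; // 1MB reads, like Vista's copy engine
        const int WriteSize = 64 * 1024;  // ...flushed back out as 64KB writes

        static void Main(string[] args)
        {
            string src = args[0]; // source file
            string dst = args[1]; // destination file
            byte[] buffer = new byte[ReadSize];
            using (FileStream fin = new FileStream(src, FileMode.Open, FileAccess.Read))
            using (FileStream fout = new FileStream(dst, FileMode.Create, FileAccess.Write))
            {
                int got;
                while ((got = fin.Read(buffer, 0, ReadSize)) > 0)
                {
                    // write the 1MB chunk back out in 64KB slices
                    for (int off = 0; off < got; off += WriteSize)
                        fout.Write(buffer, off, Math.Min(WriteSize, got - off));
                }
            }
        }
    }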
  19. LOL. Don't hold your breath. It's not going to happen: they're not going to sink hundreds of millions of dollars into an OS they've replaced and are going to stop selling soon. You want that kernel? It's up to you to make the move to x64...
  20. That's always a bit confusing really. Most new Windows versions tend to shuffle things around like this and rename some stuff. The quickest way I know of to get to Network Connections (not Start > Control Panel > Network and Sharing Center > Manage network connections, nor right-clicking on the systray icon > Network and Sharing Center > Manage network connections) is Win+R, ncpa.cpl (but yes, you have to remember the name now). And again, that's what it comes down to. We have to stay current. It's not just a Vista thing. New desktop OS, new server OS, new Exchange, new SQL Server, new dev tools, new hardware, new technologies, new apps, new ways to do things, new everything... It's a VERY rapidly changing field, and we're all struggling to keep current with everything.
  21. Nah, I just use the basic one. The results are perfect for "normal" boxes. This one said 750W, so I figured I'd look at the 750W PSUs at my usual shopping place, and the one I bought was on special for $74. I looked at the specs, and I knew 54A combined (4x 19A rails) on the 12V side would be plenty (even if you subtract 24A for the 12 HDs spinning up, that still leaves 30A for the CPU and such). The reviews were excellent, and it basically had everything I was wishing for (except perhaps being modular, but the cheapest good 750W modular PSU was like 3x the price, so too bad). Cooler Master Real Power Pro 750W. It's powerful (it's not marketing lies -- it can actually supply 900W for a short amount of time, like drives spinning up), has ~85% efficiency, it has wires for everything (18 total, 6 of which are SATA, which I needed badly) that are plenty long (up to 31") even for a large full tower where the PSU is at the bottom, it has Active PFC, it's not missing a lot of the filtering stuff like many PSUs are, it uses a good design and quality parts (MOSFETs & diodes that can handle plenty of power -- nothing weak anywhere), it uses decent-quality caps rated at good temps, it has excellent voltage regulation & low ripple, it handles brownouts just fine, it has pretty much every optional protection a PSU can offer, it runs quiet, etc. I'm sure there's even better out there, but not for $74
  22. I've been investigating the Vista SP1 file copy performance a bit. The speed is perfectly fine as far as I'm concerned, but I felt like having a look anyway, especially at copying large files on the same disk. To quote Mark Russinovich: While copying a file (I've tried sizes ranging from 15MB to 700MB) onto the same disk, explorer.exe: reads 4x 1MB chunks (initial reads), writes 64x 64KB chunks (4MB), then repeats "read 1x 1MB chunk, write 16x 64KB chunks (1MB too)" until it reaches the end of the file, where it reads the remainder (not quite 1MB), then writes the rest of the file, still in 64KB chunks. The exact same thing happens when I copy from cmd.exe too. TeraCopy works drastically differently: it uses a 10MB buffer for both reading and writing, and repeats a "3 consecutive 10MB reads, 3 consecutive 10MB writes" cycle. While I haven't yet spent time timing those exact scenarios (I soon will), I've been doing some tests, using yet another tool I just might share: VFCGB (the name just rolls off the tongue, don't you think? ), or "Vista File Copy Ghetto Benchmark" for those not into over-complicated nonsensical acronyms. You tell it which file to use for its tests, and how many times to repeat them (e.g. vfcgb.exe somelargefile.ext 10). It will copy the file as many times as you told it to, using different buffer sizes: 1KB, 2KB, 4KB, 8KB, 16KB, 32KB, 64KB, 128KB, 256KB, 512KB, 1024KB and finally 2048KB (12 tests total, each repeated as often as you told it to). And it displays how long it took to copy the file for each pass, plus the average time (plain old arithmetic mean) + the % difference in speed for each pass compared to the average (e.g. pass 1/10 was 5% slower than the calculated average). It's fairly basic right now. I'll be adding Vista-style "asymmetric" buffer sizes (1MB read/64KB write) and a test with TeraCopy-sized buffers too (10MB), and likely CSV exporting or such, for easier analysis of the output data in apps like Excel (beats typing the times in by hand for sure). I'll also add separate read and write benches (reading a large file into buffers of different sizes, and writing a differently sized buffer many times to create a large file). Oh, and eventually some "larger" reads at the beginning, like Vista does to "trigger the Cache Manager read-ahead thread to issue large I/Os" (as Mr Russinovich puts it). Anyways. Here are some preliminary results, on 3 different hard drives -- all SATA with NCQ. I'll be trying an IDE (PATA) drive eventually, and perhaps do some test runs on an XP SP3 box just to see how it differs. On the left Explanations: Each curve represents how long it took to complete the file copy operation at different buffer sizes (each curve is actually the average of 10 copy passes). Different curves represent tests done with files of different sizes, on different drives. I know, it can be hard to follow some of them, but there are 9 curves and I don't do miracles. The red curve is the average of the 7 tests. The yellow curve shows the number of read/write operations required to copy the file; it was added just to see how closely the "average" curve would follow it. Observations: No surprise, with tiny buffer sizes (1KB to 8KB or so), copying files is excruciatingly slow. As you increase the buffer size, the file copies happen quite a bit faster. It peaks @ 128KB (fastest). @ 256KB not much has changed yet. But as you hit 512KB -- OUCH!!!1!one! 
Copying files using a 512KB buffer is nearly as slow as copying files using a tiny 1KB buffer! And that's the case on every copy I've done, no matter the file size, no matter the hard drive. Perhaps it's the cache manager (write-behind?) having a hard time coping? Not sure yet, but the performance is truly awful -- you'd think it would be better than at 256KB, really. And once you go past that (bigger buffer sizes again), the problem starts to go away for some reason. I'll definitely be looking into this! Another thing you can notice: when you copy bigger files, you tend to need a somewhat bigger buffer to get decent speed (just look at the top 2 curves, they're the two 700MB files). With smaller files, copy performance is still acceptable @ 16KB, but with large files you really want 64KB or more. On the right This is just another representation of the "average" curve (on the left, in red). All the file copy passes (10), for every file, on every drive, have been averaged. You can clearly see that file copy speed peaks @ 128KB (24%): overall it's about 4x faster using that than with a 1KB buffer, and still nearly 4x as fast as with a 512KB buffer. It does make me wonder. Vista reading 1MB chunks doesn't initially seem like the best setting for max copy speed, but I'll be looking into that further. Also, I'm not sure why there's that incredible dip in performance @ 512KB. My current guess would be the write-behind cache having issues keeping up, and that might explain why Vista does 64KB writes (not 1MB) while it does 1MB reads. Since I've been using the same size for read & write buffers so far, we can't be sure yet which one is causing the slowdown (read? write? both?). And I'll also have to try 10MB buffers like TeraCopy. I'll post a follow-up later. The core of the benchmark is nothing fancy -- see the sketch below.
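(Roughly what VFCGB's main loop looks like, as a C# sketch -- my reconstruction, not the actual tool; the real thing repeats each size N times and computes the averages and % deltas, and note that the OS cache will skew repeated runs unless you account for it:)

    using System;
    using System.Diagnostics;
    using System.IO;

    class VfcgbSketch
    {
        static readonly int[] BufferSizesKB = { 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048 };

        static void Main(string[] args)
        {
            string src = args[0]; // file to copy for the test
            foreach (int kb in BufferSizesKB)
            {
                byte[] buffer = new byte[kb * 1024];
                Stopwatch sw = Stopwatch.StartNew();
                using (FileStream fin = new FileStream(src, FileMode.Open, FileAccess.Read))
                using (FileStream fout = new FileStream(src + ".copy", FileMode.Create, FileAccess.Write))
                {
                    int got;
                    while ((got = fin.Read(buffer, 0, buffer.Length)) > 0)
                        fout.Write(buffer, 0, got);
                }
                sw.Stop();
                File.Delete(src + ".copy");
                Console.WriteLine("{0,4}KB buffer: {1}ms", kb, sw.ElapsedMilliseconds);
            }
        }
    }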
  23. I was actually going to suggest using that, but IniMod doesn't seem to support deleting lines from inf files (after all, it's an .ini tool, not an .inf one), such as: HKLM,"SYSTEM\CurrentControlSet\Services\gagp30kx,"%gagp30kx_svcdesc% Or at least that's what it seems like -- I haven't actually tried. Most ini tools would need proper ini syntax (you'd think so at least), i.e. key=value, to modify anything, and inf files for the most part aren't like that (no key on most lines!). I haven't looked at the ones you linked to either; perhaps one does the job. Personally, I haven't had to edit/modify an inf file in several years (and even back then, I used Notepad for that), so I don't keep up with the list of specialized inf tools. If one of those does what he needs, then by all means... I like writing small tools that scratch a personal itch of mine, but I have no use at all for an inf tool. This script was meant as a specialized tool (a quick 5-minute scripting job) for a very specific purpose (it was simple and did the job just fine), but now that his needs have changed it's probably not the best tool anymore. At this point, it's up to him to try different apps.
  24. It only took a couple of minutes to make the changes. No thanks needed. Yes, it could be. But I thought all you wanted was to remove a section. Right now I'm not totally sure what your real needs are (probably more than just removing some lines). I mean, if I add this, is the next request going to be "replacing a line" or such? -- not that there's anything wrong with that! But adding features like that one by one is more work than doing a fully featured app in the first place (that way, you're not deleting half your code every single time you add a feature, and making big changes to things like command-line argument parsing). I'd be better off rewriting it all from scratch instead of making changes to this very simple script all the time, and preferably in another language too (most likely as a C# console app). It's pretty simple to do. I could make a fancy inf tool (with options to add/delete/replace values, lines and such, joining files, and maybe even simply taking commands from text files or such) and write basic documentation for it (i.e. command-line usage, how to escape some characters, etc). I'll put it on my (long) list of pet projects. I'll get around to it sometime... I'm not quite sure just how much interest there is in such a tool, nor how useful it would be, so it's not exactly my top priority for now.
  25. On a dual core box that's not memory starved? Vista SP1 for sure. Older machines are likely better off with XP (never tried loading Vista on an old box)